Article

Adaptive 3D Visual Servoing of a Scara Robot Manipulator with Unknown Dynamic and Vision System Parameters

by Jorge Antonio Sarapura, Flavio Roberti and Ricardo Carelli *
INAUT—Instituto de Automática—CONICET, Universidad Nacional de San Juan, San Juan 5400, Argentina
* Author to whom correspondence should be addressed.
Automation 2021, 2(3), 127-140; https://doi.org/10.3390/automation2030008
Submission received: 28 May 2021 / Revised: 16 July 2021 / Accepted: 21 July 2021 / Published: 27 July 2021
(This article belongs to the Collection Smart Robotics for Automation)

Abstract:
In the present work, we develop an adaptive dynamic controller based on monocular vision for the tracking of objects with a three-degrees-of-freedom (DOF) Scara robot manipulator. The main characteristic of the proposed control scheme is that it considers the robot dynamics, the depth of the moving object, and the mounting of the fixed camera to be unknown. The design of the control algorithm is based on an adaptive kinematic visual servo controller whose objective is the tracking of moving objects even with uncertainties in the parameters of the camera and its mounting. The design also includes a dynamic controller, in cascade with the former one, whose objective is to compensate for the dynamics of the manipulator by generating the final control actions to the robot even with uncertainties in the parameters of its dynamic model. Using Lyapunov’s theory, we analyze the stability properties of the two proposed adaptive controllers, and, through simulations, we show the performance of the complete control scheme.

1. Introduction

Currently, research in the field of industrial robot control is focused on adding external sensors combined with advanced control strategies so that robots can work in unknown or semi-structured environments, thus increasing the range of applications. Among the most widely used external sensors, vision sensors provide rich information about the workspace. For this reason, vision-based control systems applied to robotics have been extensively studied in recent years.
Servo-visual control systems can be classified into image-based or position-based systems, according to how the control errors are defined, and into hand-held or fixed camera systems, depending on the location of the vision camera with respect to the robot [1]. Furthermore, these control systems can be dynamic [2] or kinematic [3], depending on whether or not the dynamics of the robot are considered in their design.
Although the parameters of the models of these control systems can be obtained with sufficient precision through calibration or identification techniques, there will always be uncertainties due to assembly errors, variations in the handled load, etc. To deal with these uncertainties, various adaptive robot controllers have been proposed. In [4], uncertainties in the kinematic parameters of a hand-held camera system were considered, without a demonstration of the stability of the system. In [2], only the uncertainties in the parameters of the vision system were dealt with, and local stability results were presented. On the other hand, in [5], precise knowledge of the kinematics of the robot and the vision system was assumed, considering only uncertainties in the robot’s dynamics.
The authors in [6,7,8] presented adaptive controllers with uncertainties in the vision system, with a proof of global convergence to zero of the control errors for positioning tasks and only bounded errors for tracking tasks. These works did not consider the dynamics of the robot. Other works, such as [9,10], considered uncertainties in both the camera and robot parameters for a fixed camera system: the first required an explicit measurement of the velocity of the robot end effector in the image, and the second followed a similar approach but avoided this measurement.
Both works dealt separately with the problem of adapting the parameters of the camera and of the robot, with a cascade structure and with a complete stability proof of the system. Although the design of the adaptive robot controller followed a classical structure, it was based on a model with torque and force inputs, which is not the case with a real industrial robot. The design of the servo-visual controller was complex, and the simulation results did not effectively show a convergence to zero of the control errors. In the above-mentioned works, the controllers were designed for planar or 2-D robots.
Currently, few works have considered adaptive servo-visual control for a 3-D robot. In [11], an adaptive dynamic controller designed with backstepping techniques was presented that considered uncertainties in both the camera and robot parameters in a unified structure and allowed the asymptotic tracking of objects. However, this was achieved through the use of two fixed cameras mounted on perpendicular planes. In [12], an adaptive kinematic controller based on linearization techniques using a fixed camera was proposed, considering two decoupled systems for the controller design: one for depth control and one for 2-D displacement control.
In [13], an adaptive visual servo controller for trajectory tracking was presented using a calibrated Kinect camera, which acted as a dense stereo sensor, and a controller based only on the inverse kinematic model of the manipulator with a position-based control law. The authors did not provide a stability analysis of the proposed system, and the performance of the controller was verified through experimental results considering only the Root Mean Squared Error (RMSE) of the position of the robot end effector as representative of the controller’s precision.
In [14], an adaptive control approach was presented that considered the robot’s kinematics and dynamics as unknown. The system used a calibrated camera to identify and calculate the Cartesian position of the robot end effector, on which an unknown tool was mounted, in order to estimate the 3-D information of the tool through a kinematic observer. The adaptive controller was of the model-free type, combined with the kinematic observer. The stability of the closed-loop system was demonstrated by Lyapunov’s theory but was strongly conditioned on the convergence of the kinematic observer, which necessarily required persistently exciting trajectories to converge. The performance was shown through simulations.
In [15], an adaptive controller for trajectory tracking by a 3-DOF manipulator robot with a fixed camera configuration was presented, considering the parameters of the camera as well as the dynamics of the manipulator and of its electric actuators as unknown. The authors proposed a control system with two control laws based on the backstepping technique and with velocity measurement in the image plane. The first control law set the armature current of the motors as an auxiliary control variable, and the second generated the voltage references to the motors as the final control action.
For the adaptation of the different parameters, eight adaptive laws were required, and the authors demonstrated the stability of the proposed closed-loop system by Lyapunov’s theory, assuming that the estimated parameters did not cause a singularity in the estimation of the depth-independent image Jacobian matrix. The simulation results only showed the convergence to zero of the position and velocity control errors in the image for a single circular path in the image plane; they did not show the convergence of the estimated parameters, nor the auxiliary control variable and the joint positions of the manipulator during the task.
A similar approach was presented in [16]; however, the measurement of the velocity in the image plane was avoided by incorporating a kinematic observer in cascade with the adaptive controller. The authors showed that the image-space tracking errors converged to zero, using a depth-dependent quasi-Lyapunov function plus a standard Lyapunov-like function, together with the asymptotic convergence of the observation errors in the image space. However, their simulation results only showed the convergence to zero of the position control errors for a single circular path in the image plane. As in the previous work, they did not show the convergence of the estimated parameters; in addition, their results showed that the depth estimation error did not converge.
In [17], a research work prior to the current one was presented, in which only a planar robot with two degrees of freedom was considered and where the unknown depth of the target was constant. In the present work, we propose an adaptive control system consisting of two cascaded controllers to control a 3-D robot. The first is an adaptive image-based kinematic visual servo controller in a fixed camera setup, whose aim is for the robot to follow a desired 3-D Cartesian trajectory even without knowing the depth and relative orientation between the end of the robot and the camera.
The second is an adaptive dynamic controller, with joint velocity reference inputs from the first controller, which compensates for the dynamics of the manipulator even with uncertainties in its dynamic parameters. The designed control system considers the real dynamics of a Scara 3-D industrial manipulator robot, and the ultimate boundedness of the control errors of the entire system is demonstrated using Lyapunov’s theory. The performance is shown through representative simulations.

2. Robot Model

The kinematic and dynamic models of a Scara-type robot manipulator [18] are presented as follows.

2.1. Kinematic Model

The kinematic model of a Scara Bosch SR800 [18] manipulator with 3 DOF can be written as:

$$x^w(q) = t_r^w = \begin{bmatrix} x_r^w \\ y_r^w \\ z_r^w \end{bmatrix} = \begin{bmatrix} l_2\cos(q_1+q_2) + l_1\cos(q_1) \\ l_2\sin(q_1+q_2) + l_1\sin(q_1) \\ h + q_3 \end{bmatrix}, \tag{1}$$

where $t_r^w$ is the position of the end of the robot in the inertial frame $w$; $l_1$ and $l_2$ are the lengths of the first two links; $h$ is the maximum height of the operating end; and $q = [q_1, q_2, q_3]^T$ is the vector of joint positions (see Figure 1). The transformation between the robot frame $r$ and the inertial frame $w$ is given by the vector $t_r^w$ and the rotation matrix $R_r^w$, equal to the identity matrix due to a mechanical compensation of the robot’s orientation:

$$g_r^w = \begin{bmatrix} R_r^w & t_r^w \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} I & t_r^w \\ 0 & 1 \end{bmatrix}. \tag{2}$$
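To make the model concrete, the following minimal Python sketch evaluates the forward kinematics of Equation (1); the link lengths and height used here are placeholder values, since the actual Bosch SR800 dimensions are reported in [18].

```python
import numpy as np

# Placeholder kinematic parameters (the actual Bosch SR800 values are given in [18])
L1, L2, H = 0.25, 0.25, 0.4  # link lengths l1, l2 [m] and maximum height h [m]

def forward_kinematics(q):
    """End-effector position t_r^w for the joint vector q = [q1, q2, q3], Eq. (1)."""
    q1, q2, q3 = q
    x = L2 * np.cos(q1 + q2) + L1 * np.cos(q1)
    y = L2 * np.sin(q1 + q2) + L1 * np.sin(q1)
    z = H + q3
    return np.array([x, y, z])

print(forward_kinematics([0.1, -0.3, -0.05]))
```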

2.2. Dynamic Model

The 3-DOF dynamic model of the Bosch SR800 robot can be written as [18]:

$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} = \dot{q}_{ref}, \tag{3}$$

where $M(q)$ is the inertia matrix; $C(q,\dot{q})$ is the matrix of centripetal and Coriolis forces; $\dot{q}$ and $\ddot{q}$ are the vectors of joint velocities and accelerations; and $\dot{q}_{ref}$ is the vector of reference velocities applied to the robot as a control action to its internal servos.

The model (3) can be rewritten in parameterized form as:

$$\dot{q}_{ref} = \Phi(q, \dot{q}, \ddot{q})\,X_{inert}, \tag{4}$$

where $\Phi(q,\dot{q},\ddot{q})$ is the regression matrix, whose elements are functions of $q$, $\dot{q}$, and $\ddot{q}$, and $X_{inert}$ is the vector of dynamic and actuator parameters of the Bosch SR800 robot, whose identified value is $X_{inert} = [1.73, 0.19, 0.09, 0.10, 0.92, 0.009]^T$.
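Because the model (4) is linear in $X_{inert}$, the parameter vector can be identified by stacking Equation (4) over recorded motion samples and solving a linear least-squares problem. The sketch below illustrates this idea; here, `regressor` stands for the matrix $\Phi(q,\dot{q},\ddot{q})$, whose elements are derived in [18] and are treated as a given function.

```python
import numpy as np

def identify_inertial_parameters(regressor, samples):
    """Least-squares estimate of X_inert from Eq. (4): qdot_ref = Phi(q, qd, qdd) X_inert.

    regressor -- function (q, qd, qdd) -> (3 x 6) regression matrix Phi
    samples   -- list of tuples (q, qd, qdd, qdot_ref) recorded on the robot
    """
    A = np.vstack([regressor(q, qd, qdd) for q, qd, qdd, _ in samples])
    b = np.concatenate([qdot_ref for *_, qdot_ref in samples])
    X_inert, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X_inert
```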

3. Simulated Experimental Platform and Vision System

To validate the proposed control system, we used the model of an experimental platform: a real 4-DOF SCARA Bosch SR-800 industrial robot, available at our institute, which represents a common robotic structure referred to in many works [19].
The camera used in the experiments was a Dragonfly Express IEEE-1394b camera, capable of acquiring and transmitting color images through the IEEE-1394b bus at 200 fps (frames per second) with a resolution of 640 × 480 px (pixels). The camera is mounted at a certain height above the manipulator, such that it captures the entire workspace. The images captured by the camera were processed in real time using functions from the OpenCV library to extract two characteristic points fixed to the end effector of the robot, which were used as image features in the visual servoing algorithm, as explained below.
The modeling of the vision system provides the equations that relate the position of the end of the robot ($x^w = t_r^w$), and the fixed distance $d_x^w$ to a second point displaced along the $x$ axis of the robot frame $r$, with their corresponding projections on the image plane. Figure 1 shows the frames associated with the manipulator $r$, the camera $c$, and the 3-D space, $w$ and $w_2$, with respect to which the poses of the robot and the camera, respectively, are defined. The transformations between the inertial frames $w$ and $w_2$, and between the frames $c$ and $w_2$, are given by:

$$g_{w_2}^w = \begin{bmatrix} I & t_{w_2}^w \\ 0 & 1 \end{bmatrix}, \qquad g_c^{w_2} = \begin{bmatrix} R_c^{w_2} & 0 \\ 0 & 1 \end{bmatrix}, \tag{5}$$

where $t_{w_2}^w = [x_{w_2}^w, y_{w_2}^w, z_{w_2}^w]^T$ is the (generally unknown) position vector of $w_2$ expressed in $w$, and $R_c^{w_2} = R_z(\theta)\,R_{ideal} = (R_c^{w_2})^{-1} = R_{w_2}^c$, with:

$$R_z(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad R_{ideal} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}. \tag{6}$$
The 3-D point that represents the position of the robot is mapped to the point $x_1^I$ on the camera image plane using:

$$\lambda_z\,\bar{x}_1^I = K\,\Pi_0\, g_{w_2}^c \underbrace{\Pi\, g_w^{w_2}\,\bar{x}^w}_{\bar{x}^{w_2}}, \tag{7}$$

where $\bar{x}_1^I$ is the point $x_1^I$ expressed in homogeneous coordinates; $K$ and $\Pi_0$ are the matrices of intrinsic parameters of the camera and of perspective projection; $x^{w_2} = t_r^w - t_{w_2}^w$ is the robot position expressed in $w_2$; and $\lambda_z$ is an unknown scale factor. Operating algebraically, expression (7) can be rewritten in non-homogeneous coordinates as:
$$x_1^I = \alpha\, R_z(\theta)\, x^{w_2} + o^I, \tag{8}$$

where $x^{w_2} = R_{ideal}\,(t_r^w - t_{w_2}^w)$; $\alpha = \alpha_0 / (z_{w_2}^w - (h + q_3))$ is the scale factor that depends on the unknown depth of the operating end in the image frame $I$, with $\alpha_0$ the intrinsic scale factor of the camera; and $o^I$ is the coordinate vector of the center of the image (parameters of $K$). Similarly, the second point $x^w + d$ on the end of the robot, with $d = [d_x^w, 0, 0, 1]^T$, is mapped to the point $x_2^I$:
$$\lambda_z\,\bar{x}_2^I = \lambda_z\,\bar{x}_1^I + K\, R_{w_2}^c\, d. \tag{9}$$

The distance between the two points in the image plane is given by $d^I = \alpha\, d_x^w$. Then, the vector of image features is defined as:

$$x^I = \begin{bmatrix} x_1 \\ y_1 \\ d^I \end{bmatrix} = \begin{bmatrix} x_1^I \\ d^I \end{bmatrix} = \begin{bmatrix} \alpha\, R_z(\theta)\, x^{w_2} + o^I \\ \alpha\, d_x^w \end{bmatrix}, \tag{10}$$
and its time derivative is given by:
$$\dot{x}^I = \alpha\, R_z(\theta)\, R_{ideal}\, J(q, t_{w_2}^w, d_x^w)\,\dot{q} = \alpha\, R_z(\theta)\, u = D\, u, \tag{11}$$

where $u = R_{ideal}\, J(q, t_{w_2}^w, d_x^w)\,\dot{q}$; $J(q, t_{w_2}^w, d_x^w)$ is the Jacobian of the robot; and $D = \alpha\, R_z(\theta)$ contains the generally unknown parameters of the vision system.
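For simulation purposes, the feature map (10) can be evaluated directly when the scale factor $\alpha$ and the mounting angle $\theta$ are known to the simulator (the controller, of course, treats them as unknown). A minimal sketch:

```python
import numpy as np

def image_features(x_w2, alpha, theta, o_I, d_x):
    """Image feature vector x^I of Eq. (10) for a point x_w2 expressed in frame w2."""
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    p = alpha * (Rz @ x_w2)          # scaled, rotated position
    x1 = p[0] + o_I[0]               # first image feature (pixel x)
    y1 = p[1] + o_I[1]               # second image feature (pixel y)
    dI = alpha * d_x                 # distance between the two feature points
    return np.array([x1, y1, dI])
```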

4. Adaptive Kinematic Servo-Visual Controller

The inverse model of expression (11) is given by:

$$u = D^{-1}\dot{x}^I = \begin{bmatrix} p_1 & p_2 & 0 \\ -p_2 & p_1 & 0 \\ 0 & 0 & p_3 \end{bmatrix} \begin{bmatrix} \dot{x}_1 \\ \dot{y}_1 \\ \dot{d}^I \end{bmatrix}, \tag{12}$$

where $p_1 = \cos(\theta)/\alpha$, $p_2 = \sin(\theta)/\alpha$, and $p_3 = 1/\alpha$. Note that $x^T D^{-1} x > 0$ for all $x \neq 0$ if $|\theta| < \pi/2$. Equation (12) can be expressed in the following ways:
$$u = \begin{bmatrix} p_1 & 0 & 0 \\ 0 & p_1 & 0 \\ 0 & 0 & p_3 \end{bmatrix} \begin{bmatrix} \dot{x}_1 \\ \dot{y}_1 \\ \dot{d}^I \end{bmatrix} + \begin{bmatrix} p_2\,\dot{y}_1 \\ -p_2\,\dot{x}_1 \\ 0 \end{bmatrix} = P\,\dot{x}^I + \eta, \tag{13}$$

$$u = \begin{bmatrix} \dot{x}_1 & \dot{y}_1 & 0 \\ \dot{y}_1 & -\dot{x}_1 & 0 \\ 0 & 0 & \dot{d}^I \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix} = \phi\, p. \tag{14}$$
Defining the control errors as $\tilde{x}^I = x^I - x_d^I$, where $x_d^I$ is the desired visual feature vector in the image plane, two different control laws can be proposed, as described below.

4.1. Control Law with Measurement of $\dot{x}^I$

The following adaptive control law, which uses measurement of the velocity of the image features, is proposed:

$$u_c = \hat{P}\begin{bmatrix} \rho_1 \\ \rho_2 \\ \rho_3 \end{bmatrix} + \begin{bmatrix} \hat{p}_2\,\dot{y}_1 \\ -\hat{p}_2\,\dot{x}_1 \\ 0 \end{bmatrix} = \begin{bmatrix} \rho_1 & \dot{y}_1 & 0 \\ \rho_2 & -\dot{x}_1 & 0 \\ 0 & 0 & \rho_3 \end{bmatrix}\begin{bmatrix} \hat{p}_1 \\ \hat{p}_2 \\ \hat{p}_3 \end{bmatrix} = \phi_c\,\hat{p}, \tag{15}$$

where $u_c = R_{ideal}\, J(q, t_{w_2}^w, d_x^w)\,\dot{q}_{ref}$ represents the robot’s control action in Cartesian coordinates, and $\dot{q}_{ref}$ is the joint velocity command sent to the robot. The vector $\hat{p}$ contains the estimated parameters of the vision system, with $\tilde{p} = \hat{p} - p$ the parameter error vector; the matrix $\phi_c$ is composed of the elements of the vector $\dot{x}^I$ and of the vector $\rho$, whose expression is:

$$\rho = \begin{bmatrix} \rho_1 \\ \rho_2 \\ \rho_3 \end{bmatrix} = \dot{x}_d^I - \lambda\,\tilde{x}^I, \tag{16}$$

where $\lambda > 0$ represents the gain of the controller.

Controller Analysis

Assuming perfect velocity tracking ($u = u_c$, that is, $\dot{q} = \dot{q}_{ref}$), from expressions (13) and (15), the closed-loop equation of the system is obtained:

$$P\,\dot{x}^I + \eta = \phi_c\,\hat{p} = \phi_c\,(p + \tilde{p}) = P\,\rho + \eta + \phi_c\,\tilde{p} \;\;\Longrightarrow\;\; P\,(\dot{\tilde{x}}^I + \lambda\,\tilde{x}^I) = \phi_c\,\tilde{p}. \tag{17}$$
Then, the following Lyapunov candidate function is proposed:
$$V = \tfrac{1}{2}\,\tilde{x}^{I\,T} P\,\tilde{x}^I + \tfrac{1}{2}\,\tilde{p}^T \gamma\,\tilde{p}, \tag{18}$$
whose time derivative is:
$$\dot{V} = -\lambda\,\tilde{x}^{I\,T} P\,\tilde{x}^I + \tilde{x}^{I\,T}\phi_c\,\tilde{p} + \tilde{p}^T\gamma\,\dot{\tilde{p}}. \tag{19}$$
This suggests the following adaptation law:

$$\dot{\tilde{p}} = -\gamma^{-1}\phi_c^T\,\tilde{x}^I, \tag{20}$$
where γ > 0 is an adaptation gain matrix. Replacing (20) in the expression (19), we obtain:
$$\dot{V} = -\lambda\,\tilde{x}^{I\,T} P\,\tilde{x}^I \leq 0. \tag{21}$$

Therefore, $\tilde{x}^I \in L_\infty$ and $\tilde{p} \in L_\infty$. Integrating Equation (21), it can be proven that $\tilde{x}^I \in L_2$. From expression (17), it follows that $\dot{\tilde{x}}^I \in L_\infty$. Then, by Barbalat’s lemma, we conclude that $\tilde{x}^I(t) \to 0$ as $t \to \infty$, thus achieving the control objective.
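For illustration, a discrete-time sketch of one step of the controller (15) together with the adaptation law (20) is given below; the Euler integration of the parameter estimate and the scalar gain `lam` are simulation conveniences, not part of the continuous-time analysis.

```python
import numpy as np

def control_step(xI, xI_dot, xI_des, xI_des_dot, p_hat, lam, gamma_inv, dt):
    """One step of the adaptive law (15) with the parameter update (20)."""
    x_err = xI - xI_des                        # control error in the image plane
    rho = xI_des_dot - lam * x_err             # Eq. (16)
    phi_c = np.array([[rho[0],  xI_dot[1], 0.0],
                      [rho[1], -xI_dot[0], 0.0],
                      [0.0,     0.0,       rho[2]]])
    u_c = phi_c @ p_hat                        # Cartesian control action, Eq. (15)
    p_hat = p_hat - dt * (gamma_inv @ phi_c.T @ x_err)   # Eq. (20), Euler-integrated
    return u_c, p_hat
```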

4.2. Control Law without Measurement of $\dot{x}^I$

The following proposed adaptive control law does not require measurement of the velocity of the image features:

$$u_c = \begin{bmatrix} \rho_1 & \rho_2 & 0 \\ \rho_2 & -\rho_1 & 0 \\ 0 & 0 & \rho_3 \end{bmatrix}\begin{bmatrix} \hat{p}_1 \\ \hat{p}_2 \\ \hat{p}_3 \end{bmatrix} = \phi_c\,\hat{p}, \tag{22}$$

where $\phi_c$ is now composed of the elements of the vector $\rho$ given by expression (16).
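The velocity-free law changes only how $\phi_c$ is built; a sketch under the same conventions as the previous one:

```python
import numpy as np

def control_step_no_velocity(xI, xI_des, xI_des_dot, p_hat, lam, gamma_inv, dt):
    """One step of the velocity-free adaptive law (22) with the update (20)."""
    x_err = xI - xI_des
    rho = xI_des_dot - lam * x_err             # Eq. (16)
    phi_c = np.array([[rho[0],  rho[1], 0.0],  # phi_c now built from rho alone
                      [rho[1], -rho[0], 0.0],
                      [0.0,     0.0,    rho[2]]])
    u_c = phi_c @ p_hat                        # Eq. (22)
    p_hat = p_hat - dt * (gamma_inv @ phi_c.T @ x_err)   # Eq. (20)
    return u_c, p_hat
```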

4.2.1. Controller Analysis

Assuming perfect velocity tracking ($u = u_c$), from expressions (12) and (22), the closed-loop equation of the system is obtained:

$$D^{-1}\dot{x}^I = \phi_c\,\hat{p} = \phi_c\,(p + \tilde{p}) = D^{-1}\rho + \phi_c\,\tilde{p} \;\;\Longrightarrow\;\; D^{-1}(\dot{\tilde{x}}^I + \lambda\,\tilde{x}^I) = \phi_c\,\tilde{p}. \tag{23}$$
Then, the following Lyapunov candidate function is proposed:
$$V = \tilde{x}^{I\,T} D^{-1}\,\tilde{x}^I + \tfrac{1}{2}\,\tilde{p}^T\gamma\,\tilde{p}, \tag{24}$$
whose time derivative is:
$$\dot{V} = \tilde{x}^{I\,T} D^{-1}\dot{\tilde{x}}^I + \dot{\tilde{x}}^{I\,T} D^{-1}\tilde{x}^I + \tilde{p}^T\gamma\,\dot{\tilde{p}} = -2\lambda\,\tilde{x}^{I\,T} D^{-1}\tilde{x}^I + \alpha^2\,\tilde{x}^{I\,T} D^{-2\,T}\phi_c\,\tilde{p}, \tag{25}$$
where expression (20) is used as the adaptation law. From (25), outside the ball

$$\|\tilde{x}^I\| \geq \frac{2a\,|\dot{x}_{d\,max}^I|}{\lambda}\,\frac{\|\tilde{p}\|}{1 - 2a\,\|\tilde{p}\|} = \eta, \tag{26}$$

it is verified that $\dot{V} < 0$. Then, the control error is ultimately bounded by the value of $\eta$.

4.2.2. Remarks

  • For positioning, $|\dot{x}_{d\,max}^I| = 0$; therefore, $\tilde{x}^I(t) \to 0$ as $t \to \infty$.
  • For trajectories that are persistently exciting, it can be shown that $\tilde{p}(t) \to 0$ as $t \to \infty$, and therefore $\tilde{x}^I(t) \to 0$ as $t \to \infty$.
Then, under these conditions, the same result as for the controller with measurement of $\dot{x}^I$ holds, namely $\tilde{x}^I(t) \to 0$ as $t \to \infty$, thus achieving the control goal.

5. Dynamic Compensation Design

This section drops the perfect velocity tracking assumption and considers a velocity tracking error ($u = u_c + \tilde{u}$) due to the dynamics of the robot. Under this condition, the closed-loop Equation (23) now results in:

$$D^{-1}(\dot{\tilde{x}}^I + \lambda\,\tilde{x}^I) = \phi_c\,\tilde{p} + \tilde{u}, \tag{27}$$
and the time derivative of (24) is:
$$\dot{V} = -\lambda\,\tilde{x}^{I\,T} D^{-1}\tilde{x}^I + \left(1 + \tfrac{1}{\alpha}\right)\tilde{x}^{I\,T}\tilde{u} + \tfrac{1}{\alpha}\,\tilde{x}^{I\,T} D^{-2}\phi_c\,\tilde{p}. \tag{28}$$
From (28), outside the ball

$$\|\tilde{x}^I\| \geq \frac{1}{\lambda}\,\frac{|\dot{x}_{d\,max}^I|\,\|\tilde{p}\| + 1.5\,\|\tilde{u}\|}{p_1 - \|\tilde{p}\|} = \eta, \tag{29}$$

it is verified that $\dot{V} < 0$. Then, the control error is ultimately bounded by the value of $\eta$. Note that $\tilde{u}(t)$ does not necessarily converge to zero since, by including the robot dynamics, the convergence $\tilde{p}(t) \to 0$ is not always achieved, as an attempt is made to identify a structure different from the one for which the kinematic controller was designed. As a consequence, the control error increases.
To overcome this degradation of the kinematic control, a cascaded adaptive dynamic controller is proposed that makes the robot reach the reference velocity provided by the kinematic controller, thereby restoring the good performance of the control system (see Figure 2). Defining the velocity control error as $\dot{\tilde{q}} = \dot{q} - \dot{q}_d$, the following control law is proposed:

$$\dot{q}_{ref} = \hat{M}\,\nu + \hat{C}\,\dot{q}_d = \phi_d\,\hat{X}_{inert}, \tag{30}$$

where $\nu = \ddot{q}_d - K\,\dot{\tilde{q}}$; $K$ is a positive definite gain matrix; $\hat{X}_{inert}$ represents the estimated robot parameters, with $\tilde{X}_{inert} = \hat{X}_{inert} - X_{inert}$ the parameter error vector; and $\hat{M}$ and $\hat{C}$ are the inertia and Coriolis matrices calculated with the estimated parameters. Replacing $\dot{q}_{ref}$ in the dynamic model (3), we obtain the closed-loop equation of the system:
$$M\,\ddot{q} + C\,\dot{q} = \phi_d\,\hat{X}_{inert} = \phi_d\,X_{inert} + \phi_d\,\tilde{X}_{inert} \;\;\Longrightarrow\;\; M\,(\ddot{\tilde{q}} + K\,\dot{\tilde{q}}) + C\,\dot{\tilde{q}} = \phi_d\,\tilde{X}_{inert}. \tag{31}$$
We consider the following positive definite function:
$$V = \tfrac{1}{2}\,\dot{\tilde{q}}^T M\,\dot{\tilde{q}} + \tfrac{1}{2}\,\tilde{X}_{inert}^T\,\gamma_{dyn}\,\tilde{X}_{inert}, \tag{32}$$

and its time derivative along the trajectories of the system is:

$$\dot{V} = \dot{\tilde{q}}^T M\,\ddot{\tilde{q}} + \tfrac{1}{2}\,\dot{\tilde{q}}^T\dot{M}\,\dot{\tilde{q}} + \tilde{X}_{inert}^T\gamma_{dyn}\,\dot{\tilde{X}}_{inert} = -\dot{\tilde{q}}^T M K\,\dot{\tilde{q}} + \tfrac{1}{2}\,\dot{\tilde{q}}^T(\dot{M} - 2C)\,\dot{\tilde{q}} + \tilde{X}_{inert}^T\left(\phi_d^T\,\dot{\tilde{q}} + \gamma_{dyn}\,\dot{\tilde{X}}_{inert}\right), \tag{33}$$
where the term $\dot{\tilde{q}}^T(\dot{M} - 2C)\,\dot{\tilde{q}}$ is zero, since $(\dot{M} - 2C)$ is an antisymmetric matrix when $C$ is calculated with the Christoffel terms. Defining, as the adaptation law,

$$\dot{\tilde{X}}_{inert} = -\gamma_{dyn}^{-1}\,\phi_d^T\,\dot{\tilde{q}}, \tag{34}$$
and replacing it in expression (33), we obtain:
$$\dot{V} = -\dot{\tilde{q}}^T M K\,\dot{\tilde{q}} \leq 0, \tag{35}$$

and therefore $\dot{\tilde{q}} \in L_\infty$ and $\tilde{X}_{inert} \in L_\infty$. Furthermore, by integrating $\dot{V}$ over $[0, T]$, it can be shown that $\dot{\tilde{q}} \in L_2$. From expression (31), it follows that $\ddot{\tilde{q}} \in L_\infty$. Then, by Barbalat’s lemma, we conclude that $\dot{\tilde{q}}(t) \to 0$ as $t \to \infty$, thus achieving the control objective.
As proven above, the result $\dot{\tilde{q}}(t) \to 0$ as $t \to \infty$ implies that:

$$\tilde{u}(t) = R_{ideal}\,J(q, t_{w_2}^w, d_x^w)\,\dot{\tilde{q}}(t) \to 0 \quad \text{as} \quad t \to \infty. \tag{36}$$
Then, going back to Equation (29) and introducing the convergence condition (36) on $\tilde{u}$, the error bound of Equation (26) and, therefore, the stability conditions previously obtained for the kinematic controller are asymptotically recovered, even in the presence of unknown robot dynamics.
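A discrete-time sketch of one step of the compensator (30) with the adaptation law (34) follows; `M_hat`, `C_hat`, and `phi_d` stand for the estimated inertia matrix, the estimated Coriolis matrix, and the dynamic regressor of [18], taken here as given functions with assumed signatures.

```python
import numpy as np

def dynamic_compensation_step(q, qd, qd_des, qdd_des, X_hat,
                              M_hat, C_hat, phi_d, K, gamma_dyn_inv, dt):
    """One step of the adaptive dynamic compensator, Eqs. (30) and (34)."""
    qd_err = qd - qd_des                                 # velocity tracking error
    nu = qdd_des - K @ qd_err
    # Control action sent to the robot's internal velocity servos, Eq. (30)
    qd_ref = M_hat(q, X_hat) @ nu + C_hat(q, qd, X_hat) @ qd_des
    # Parameter update, Eq. (34), Euler-integrated; phi_d(q, qd, qd_des, nu) is
    # the regressor such that M_hat @ nu + C_hat @ qd_des = phi_d @ X_hat
    X_hat = X_hat - dt * (gamma_dyn_inv @ phi_d(q, qd, qd_des, nu).T @ qd_err)
    return qd_ref, X_hat
```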

6. Simulations

In this section, we show simulation experiments that can be considered realistic and whose results are very close to those that would be obtained with the real robot. This is because the model used in these simulations is an identified model of the real robot, which represents the dynamics of both the rigid body and the electric motors and reduction gears of its actuators. A complete study of this model, the reduction of the total set of dynamic parameters to a minimal set of base parameters that considers only the dominant and identifiable dynamics, and its subsequent expansion to model and identify the dynamics of the actuators can be found in [18].
To verify the performance of the proposed control system, realistic simulations were carried out for a positioning task and for a trajectory tracking task using the identified kinematic and dynamic models of the Bosch SR800 SCARA industrial robot. The parameters of the vision system were $\theta = 10^\circ$ and $\alpha_0 = 820$ px/mm, and estimation errors of 20% and 10%, respectively, were considered.
Figure 3 shows the evolution of the image features for a positioning task, starting from rest at the position $x^w(0) = [0.63, 0.22, 0.35]^T$ m and reaching the desired position $x_d^w = [0.31, 0.61, 0.45]^T$ m. The kinematic controller was applied first to the kinematic model of the robot, and then its dynamics were incorporated. The gains were set to the values shown in Table 1. Figure 4 and Figure 5 show the norm of the control error and the convergence of the vision system parameters; in both cases, the control error converged close to zero, as indicated in the remarks of Section 4.
On the other hand, Figure 6 shows the image feature vector $x^I$ for the tracking task, starting from the position $x^w(0) = [0.523, 0.278, 0.423]^T$ m and following a circular spiral reference $x_d^w = [0.15\cos(24t)\cos(t) + 0.540,\; 0.15\sin(24t)\cos(t) - 0.046,\; 0.1\sin(t) + 0.423]^T$ m. The servo-visual controller was applied first to a kinematically modeled robot; then, the robot dynamics were incorporated; and, finally, these dynamics were compensated with the adaptive controller, considering a 50% error in the robot parameters. The gains used in these three cases are shown in Table 2. Figure 7 and Figure 8 show the norm of the $\tilde{x}^I$ vector and the convergence of the vision system parameters.
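For reference, the spiral used in the tracking simulation can be generated as follows (reading the reference above as $0.15\cos(24t)\cos(t)$, etc.):

```python
import numpy as np

def spiral_reference(t):
    """Desired Cartesian position x_d^w(t) [m] for the tracking task."""
    x = 0.15 * np.cos(24 * t) * np.cos(t) + 0.540
    y = 0.15 * np.sin(24 * t) * np.cos(t) - 0.046
    z = 0.10 * np.sin(t) + 0.423
    return np.array([x, y, z])

t = np.linspace(0.0, 2 * np.pi, 2000)   # one period of the slow modulation
trajectory = np.stack([spiral_reference(ti) for ti in t])
```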
Figure 9 and Figure 10 show the norm of the speed control error of the adaptive dynamic controller and the convergence of the dynamic parameters of the robot, respectively.

7. Discussion

In Section 4, the design of two new adaptive servo-visual kinematic controllers for trajectory tracking with a 3-DOF manipulator robot was presented. The main contributions of these new control schemes are their simple design, their image-based control law, and their single adaptation law, unlike previous works with complex controllers such as [15,16]. From a practical point of view, they do not require the measurement of velocity in the image plane.
Finally, these schemes represent a generalization of previous work [17] to the case of 3-D movement of the manipulator. This makes it possible to consider as unknown not only the intrinsic and extrinsic parameters of the vision system but also the depth of the target, which can be time-variant and, as shown in Section 3, can be estimated with an appropriate selection of the image features.
In the simulations, the tracking task and the target points in the positioning task were chosen to show, in both cases, the performance of the controller without speed measurement in the case where the depth of the target is time-variant. It was also demonstrated by Lyapunov’s theory that the scheme that required a speed measurement of the image characteristics achieved an asymptotic convergence of the control errors in both the positioning and tracking tasks.
On the other hand, the scheme that did not require such a measurement achieved asymptotic convergence only in positioning tasks, as shown by the simulation results of Figure 4, and in tracking tasks in which the reference trajectories sufficiently excited the dynamics of the system, such as the spiral trajectory whose result is shown in Figure 7. However, for this last scheme, the stability analysis showed that, even for trajectories that were not persistently exciting, the control errors always remained bounded. In a previous work [17], it was shown that, for the case of 2-D motion where the depth of the target was constant, the controllers always reached the control objective even on non-exciting trajectories, such as a ramp type.
Figure 5 shows that, regardless of whether the control actions generated by the kinematic controller were applied to an idealized robot modeled only with its kinematics or to a real robot modeled with its exact dynamic model, the estimated parameters always converged in the positioning tasks, although not to their true values. This shows that, for these tasks, the performance of the kinematic controller is sufficient, and dynamics compensation is not required.
However, in high-speed tracking tasks that excite the manipulator dynamics, such as those in Figure 6, it can be seen in Figure 7 how the control error in the image plane converged asymptotically to zero when the control actions generated by the kinematic controller were applied to an idealized robot modeled only with its kinematics, with the estimated parameters converging to their true values, as shown in Figure 8. When these actions were applied to a real robot modeled with its exact dynamic model, however, the performance was very poor, since the controller was attempting to control a system with a structure different from that for which it was designed, generating undesirable high-frequency movements, like those shown in Figure 8 and Figure 9; the estimated parameters may even fail to converge, as can be seen in Figure 8.
Figure 7 and Figure 9 show that the performance of the kinematic controller was practically recovered when the manipulator dynamics were compensated, with the control errors bounded as indicated by the stability proof of the dynamic compensator in Section 5, even with unknown manipulator parameters. Figure 10 shows that most of the parameters converged to their true values, while the others remained bounded very close to them. Furthermore, Figure 8 shows how the convergence of the vision system parameters was also recovered, although not to the true values as in the ideal kinematic situation.

8. Conclusions

An adaptive 3-D kinematic servo-visual controller for positioning and trajectory tracking was designed for a Scara robot manipulator, and its stability was proven based on Lyapunov’s theory. Simulation experiments showed that, for positioning, the control objective was always reached regardless of the manipulator dynamics. On the other hand, for tracking tasks with generally unknown robot dynamics, we observed that the kinematic control kept the errors bounded; however, its performance degraded with respect to its application to an ideal robot without dynamics.
However, the cascaded adaptive dynamic controller efficiently compensated for the unknown dynamics of the manipulator, and the final performance approximated that of the kinematic control, even though the estimated robot parameters did not converge to their true values, as shown by the simulation results and the stability proof based on Lyapunov’s theory. Work is currently underway to fine-tune an experimental system in order to take this research to the experimentation phase.

Author Contributions

Conceptualization, F.R. and R.C.; Formal analysis, J.A.S., F.R., and R.C.; Methodology, J.A.S.; Software, J.A.S.; Supervision, F.R. and R.C.; Writing—original draft, J.A.S.; Writing—review and editing, F.R. and R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors thank the National University of San Juan (UNSJ) and CONICET for the support and infrastructure provided to carry out this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Weiss, L.; Sanderson, A.; Neuman, C. Dynamic sensor-based control of robots with visual feedback. IEEE J. Robot. Autom. 1987, 3, 404–417.
  2. Lefeber, E.; Kelly, R.; Ortega, R.; Nijmeijer, H. Adaptive and Filtered Visual Servoing of Planar Robots. IFAC Proc. Vol. 1998, 31, 541–546.
  3. Chaumette, F.; Rives, P.; Espiau, B. Positioning of a robot with respect to an object, tracking it and estimating its velocity by visual servoing. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, 9–11 April 1991; Volume 3, pp. 2248–2253.
  4. Ruf, A.; Tonko, M.; Horaud, R.; Nagel, H. Visual tracking of an end-effector by adaptive kinematic prediction. In Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems, Innovative Robotics for Real-World Applications, IROS ’97, Grenoble, France, 11 September 1997; Volume 2, pp. 893–899.
  5. Nasisi, O.; Carelli, R. Adaptive servo visual robot control. Robot. Auton. Syst. 2003, 43, 51–78.
  6. Astolfi, A.; Hsu, L.; Netto, M.; Ortega, R. A solution to the adaptive visual servoing problem. In Proceedings of the 2001 ICRA, IEEE International Conference on Robotics and Automation, Seoul, Korea, 21–26 May 2001; Volume 1, pp. 743–748.
  7. Astolfi, A.; Hsu, L.; Netto, M.S.; Ortega, R. Two solutions to the adaptive visual servoing problem. IEEE Trans. Robot. Autom. 2002, 18, 387–392.
  8. Nuño, E.; Ortega, R. New solutions to the 2D adaptive visual servoing problem with relaxed excitation requirements. Int. J. Adapt. Control Signal Process. 2019, 33, 1843–1856.
  9. Hsu, L.; Aquino, P.L.S. Adaptive visual tracking with uncertain manipulator dynamics and uncalibrated camera. In Proceedings of the 38th IEEE Conference on Decision and Control, Phoenix, AZ, USA, 7–10 December 1999; Volume 2, pp. 1248–1253.
  10. Lizarralde, F.; Hsu, L.; Costa, R.R. Adaptive Visual Servoing of Robot Manipulators without Measuring the Image Velocity. IFAC Proc. Vol. 2008, 41, 4108–4113.
  11. Sahin, T.; Zergeroglu, E. Adaptive visual servo control of robot manipulators via composite camera inputs. In Proceedings of the Fifth International Workshop on Robot Motion and Control, RoMoCo’05, Poznań, Poland, 23–25 June 2005; pp. 219–224.
  12. Zachi, A.R.L.; Liu, H.; Lizarralde, F.; Leite, A.C. Adaptive control of nonlinear visual servoing systems for 3D cartesian tracking. Sba Controle Automação Soc. Bras. Autom. 2006, 17, 381–390.
  13. Behzadikhormouji, H.; Derhami, V.; Rezaeian, M. Adaptive Visual Servoing Control of robot Manipulator for Trajectory Tracking tasks in 3D Space. In Proceedings of the 2017 5th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 25–27 October 2017; pp. 376–382.
  14. Wang, S.; Zhang, K.; Herrmann, G. An Adaptive Controller for Robotic Manipulators with Unknown Kinematics and Dynamics. IFAC-PapersOnLine 2020, 53, 8796–8801.
  15. Liang, X.; Wang, H.; Liu, Y.; Chen, W. Adaptive visual tracking control of uncertain rigid-link electrically driven robotic manipulators with an uncalibrated fixed camera. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), Bali, Indonesia, 5–10 December 2014; pp. 1627–1632.
  16. Wang, H. Adaptive visual tracking for robotic systems without image-space velocity measurement. Automatica 2015, 55, 294–301.
  17. Sarapura, J.A.; Roberti, F.; Gimenez, J.; Patiño, D.; Carelli, R. Adaptive Visual Servoing Control of a Manipulator with Uncertainties in Vision and Dynamics. In Proceedings of the 2018 Argentine Conference on Automatic Control (AADECA), Buenos Aires, Argentina, 7–9 November 2018; pp. 1–6.
  18. Sarapura, J.A. Control Servo Visual Estéreo de un Robot Manipulador. Master’s Thesis, Universidad Nacional de San Juan, Facultad de Ingeniería, Instituto de Automática, San Juan, Argentina, 2013.
  19. Slawiñski, E.; Postigo, J.F.; Mut, V.; Carestía, D.; Castro, F. Estructura abierta de software para un robot industrial. Rev. Iberoam. Autom. Inform. Ind. RIAI 2007, 4, 86–95.
Figure 1. System frames of reference.
Figure 2. Adaptive visual servoing control system with dynamic compensation.
Figure 3. The image features for positioning.
Figure 4. The control error for positioning.
Figure 5. The estimated vision parameters for positioning.
Figure 6. The image features for the following task.
Figure 7. The control error for the following task.
Figure 8. The vision parameters estimated for the following task.
Figure 9. The speed error for the tracking task.
Figure 10. The estimated robot parameters for the tracking task.
Table 1. The gains for the positioning task.

              Kinematic Model                                  Dynamic Model
$\lambda$     $\mathrm{diag}(10,\, 10,\, 10)$                  $\mathrm{diag}(15,\, 15,\, 0.0004)$
$\gamma$      $10^{-5}\,\mathrm{diag}(800,\, 0.02,\, 0.08)$    $\mathrm{diag}(2 \times 10^{10},\, 10^{3},\, 1)$
Table 2. The gains for the following task.

              Kinematic Model                                  Dynamic Model                                   Dynamic Model with Compensation
$\lambda$     $\mathrm{diag}(200,\, 400,\, 20)$                $\mathrm{diag}(100,\, 300,\, 10)$               $\mathrm{diag}(60,\, 610,\, 20)$
$\gamma$      $10^{-5}\,\mathrm{diag}(200,\, 200,\, 0.008)$    $10^{-9}\,\mathrm{diag}(100,\, 10,\, 0.002)$    $10^{-5}\,\mathrm{diag}(400,\, 400,\, 2)$
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
