
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Tactile sensors play an important role in robotic manipulation, enabling dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback from tactile and visual sensors. This framework permits the definition of new visual controllers that track the object motion path while taking into account both the dynamic model of the robot hand and the grasping forces at the fingertips under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers based on a specified cost function, which determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors show experimental verifications on a real robotic manipulation system for some of the controllers derived from the control framework.

Multi-fingered robotic hands allow both the execution of robust grasping tasks and dexterous manipulation, reflecting two qualities associated with the human hand: dexterity and anthropomorphism [

Tactile sensors share a common purpose in robotics: they analyze the direct contact between the robot and the objects in the environment in order to adapt the robot’s reaction to the manipulated object. Tactile information is processed with two different aims: object identification and manipulation control. On the one hand, object properties extracted from the robot’s tactile sensors can be used to categorize objects into different classes. On the other hand, the measurements obtained from the tactile sensors can also be applied to control the interaction force [

Several approaches to the motion planning associated with the dexterous manipulation problem have been proposed during the last decade. They have mainly focused on three specific lines: graph representations [

Visual servo control techniques [

In addition, the proposed control approach is based on an optimal control framework. This approach is used to visually servo a robotic hand during a manipulation task while taking the hand dynamics into account. From that framework, several new controllers are derived, which offers a useful unification methodology for the direct control of any robotic manipulation system using visual and tactile sensor feedback, an approach which has not been implemented in previous research projects. The proposed approach considers the optimization of the motor signals or torques sent to the robotic hand during visual control tasks. Moreover, the general method presented in the paper is based on the formulation of the tracking problem in terms of constraints, which was suggested in [

In summary, the proposed optimal control framework, from which several new dynamic controllers can be derived, is the main contribution of this paper in comparison with previous approaches for dexterous manipulation [

The paper is organized as follows: Section 2 describes the system architecture of the robotic manipulation system. Afterwards, the kinematics and dynamics formulation of the overall robotic system is described in Section 3. Section 4 explains the theoretical concepts behind the employed optimal controller. In Section 5, the image trajectory to be tracked by the robotic manipulation system is described as a task constraint. In Section 6, the general dynamic visual servoing framework and the modifications required to perform hybrid force control are presented. In Section 7, some new controllers are derived from the optimal control framework. Section 8 describes the experimental results illustrating the proposed controllers. The final section reports some important conclusions.

The robotic manipulation system is composed of the Allegro robotic hand (SimLab Co., Seoul, Korea), whose fingertips are equipped with arrays of tactile sensors. The pressure registered by each tactel over its area in mm^{2} is converted into a force, so the forces exerted at the fingertip are obtained. The mean of these forces is considered as the force applied by the fingertip. Moreover, the contact point is assumed to lie at the tactel of maximum exerted pressure.
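The fingertip force computation described above can be sketched as follows. This is an illustrative reconstruction: the tactel pressures, the tactel area and the array layout below are made-up placeholders, not the real sensor data.

```python
import numpy as np

# Illustrative tactile array: pressure per tactel (units assumed N/mm^2).
pressures = np.array([[0.0, 1.2, 0.4],
                      [0.8, 3.5, 0.9],
                      [0.1, 1.0, 0.2]])
tactel_area = 2.0  # assumed tactel area in mm^2 (placeholder value)

# Pressure times area gives the force contributed by each tactel (N).
forces = pressures * tactel_area

# The mean of the exerted forces is taken as the fingertip force.
fingertip_force = forces[forces > 0].mean()

# The contact point is assumed at the tactel of maximum pressure.
contact_tactel = np.unravel_index(np.argmax(pressures), pressures.shape)
```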

The eye-to-hand camera is calibrated with intrinsic parameters (u_{0}, v_{0}) = (298, 225) px and (f_{u}, f_{v}) = (1,082.3, 1,073.7) px (position of the optical center and the focal lengths in the x and y directions, respectively). The manipulated object has four marks which will be the extracted visual features.
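As a quick sanity check of these intrinsics, the pinhole projection they define can be sketched; the 3D mark coordinates below are hypothetical, only the calibration values come from the text.

```python
import numpy as np

u0, v0 = 298.0, 225.0      # optical center (px), from the calibration above
fu, fv = 1082.3, 1073.7    # focal lengths (px), from the calibration above

def project(point_cam):
    """Pinhole projection of a 3D point in the camera frame to pixels."""
    X, Y, Z = point_cam
    return np.array([fu * X / Z + u0, fv * Y / Z + v0])

# Hypothetical 3D position (metres) of one of the object marks.
mark = project([0.05, -0.02, 0.60])
```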

We consider the robotic hand as a set of k fingers with three degrees of freedom each. The fingers hold an object at contact points with friction and without slippage. In order to firmly grasp and manipulate the object, the grasp is considered to be an active form closure. Thus, each fingertip i exerts a force f_{Ci} ∈ ℜ^{3} within the friction cone at the contact point. The grasping constraint between the robot and the object is given by the grasp matrix G, which maps the stacked contact forces f_{C} = [f_{C1}^{T} … f_{Ck}^{T}]^{T} at the fingertips to the resultant force and moment w_{o} ∈ ℜ^{6} on the object:

w_{o} = G f_{C}

where f_{C} and w_{o} are both expressed in the object coordinate frame S_{0} fixed to the object mass center. This equation leads to the kinematic relation between the velocity of the object ẋ_{o} ∈ ℜ^{6} and the velocity of each contact point ẋ_{Ci} ∈ ℜ^{6}, where x_{o} denotes the position and orientation of the object with respect to S_{0}. Stacking the fingers, ẋ_{C} = [ẋ_{C1}^{T} … ẋ_{Ck}^{T}]^{T} is the vector which contains all the contact point velocities, so that ẋ_{C} = G^{T} ẋ_{o}.
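The mapping from fingertip forces to the object wrench can be illustrated with a minimal numerical sketch for hard-finger contacts (point contact with friction); the contact positions and forces below are made-up values, not data from the experiments.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix such that skew(p) @ f equals np.cross(p, f)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def grasp_matrix(contact_points):
    """G in R^(6 x 3k): maps stacked contact forces to the object wrench."""
    blocks = [np.vstack((np.eye(3), skew(p))) for p in contact_points]
    return np.hstack(blocks)

# Three hypothetical contact points expressed in the object frame (m).
contacts = [np.array([0.03, 0.0, 0.0]),
            np.array([-0.03, 0.0, 0.0]),
            np.array([0.0, 0.03, 0.0])]
G = grasp_matrix(contacts)

# Each fingertip pushes with 4 N along the object z axis.
f_C = np.concatenate([np.array([0.0, 0.0, 4.0])] * 3)
w_o = G @ f_C   # resultant force and moment on the object
```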

The vector s = [f_{1x} f_{1y} f_{2x} f_{2y} … f_{nx} f_{ny}]^{T} ∈ ℜ^{2n} defines the image coordinates of the extracted features. The interaction matrix L_{s}(s) relates the rate of change of these features to the object velocity, ṡ = L_{s} ẋ_{o}.
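For point features, the interaction matrix has the classic analytic form below (normalized image coordinates and feature depth Z); this is the standard construction from the visual servoing literature, shown with made-up feature values.

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Classic 2x6 interaction matrix of a normalized point feature at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x]])

# Stacking one block per feature yields L_s in R^(2n x 6).
features = [(0.10, -0.05, 0.6), (0.12, 0.02, 0.6)]   # hypothetical (x, y, Z)
L_s = np.vstack([interaction_matrix_point(x, y, Z) for x, y, Z in features])

# s_dot = L_s @ x_o_dot: feature velocities induced by the object velocity.
x_o_dot = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0])   # pure translation along z
s_dot = L_s @ x_o_dot
```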

The finger Jacobian denoted by J_{Hi} ∈ ℜ^{3×3} relates the joint velocities of the i^{th} finger (q̇_{i} ∈ ℜ^{3}) with the fingertip velocities (ẋ_{Fi} ∈ ℜ^{3}) referenced in the camera coordinate frame S_{C}: ẋ_{Fi} = J_{Hi} q̇_{i}.

This finger Jacobian can be easily obtained from the typical robot Jacobian matrix by applying the known mapping from the robot coordinate frame to the camera coordinate frame S_{C}. Stacking the fingers, q̇ = [q̇_{1}^{T} … q̇_{k}^{T}]^{T} represents the joint velocities of the robot hand and J_{H} = diag[J_{H1} … J_{Hk}] is the robot hand Jacobian which relates joint velocities and fingertip velocities measured from the camera coordinate frame S_{C}. If there is no slippage between the fingertips and the object, it can be considered that ẋ_{C} = ẋ_{F}. Applying these equalities, the contact velocities ẋ_{C} can be related to the finger joint velocities q̇, where L_{s}^{+} is the pseudo-inverse of the interaction matrix. From this relation, the joint velocities can be obtained as a function of the rate of change of the image features:

q̇ = J_{T}^{+} ṡ

where J_{T} is the Jacobian matrix mapping from joint space to image space in this robotic manipulation system.

The dynamic model of the manipulation system can be divided into the dynamics of the grasped object and the dynamics of the multi-fingered hand subject to the contact force constraint. In this subsection, both dynamics equations are given.

The motion equation of the object, for the simple case of moving it in free space without any external force, can be described as:

M_{o} ẍ_{o} + C_{o} + g_{o} = w_{o}

where M_{o} ∈ ℜ^{6×6} is the inertia matrix of the object, C_{o} ∈ ℜ^{6} is the centrifugal and Coriolis vector, g_{o} ∈ ℜ^{6} is the gravitational force and w_{o} ∈ ℜ^{6} is the resultant force applied by the fingers. The variable ẍ_{s} ∈ ℜ^{6} is the desired object acceleration.

With regard to the multi-fingered hand, its dynamics can be modeled as a set of serial three-link rigid mechanisms corresponding to the fingers. In this case, the dynamics equation of finger i can be described as:

M_{Fi}(q_{i}) q̈_{i} + C_{Fi}(q_{i}, q̇_{i}) + g_{Fi}(q_{i}) = τ_{i} − J_{Hi}^{T} f_{Ci}

where q_{i} ∈ ℜ^{3×1}, q̇_{i} ∈ ℜ^{3×1} and q̈_{i} ∈ ℜ^{3×1} are the vectors of generalized joint coordinates, joint velocities and joint accelerations of finger i. Moreover, M_{Fi} ∈ ℜ^{3×3} is the symmetric positive definite finger inertia matrix, and C_{Fi} ∈ ℜ^{3×1} and g_{Fi} ∈ ℜ^{3×1} denote the vector of centripetal and Coriolis forces and the gravitational force of finger i, respectively. In addition, τ_{i} ∈ ℜ^{3×1} represents the applied motor commands (torques) and f_{Ci} ∈ ℜ^{3×1} is the contact force exerted by finger i at its contact point. Combining the k finger equations, M_{H} = diag[M_{F1}…M_{Fk}], C_{H} = col[C_{F1}…C_{Fk}], g_{H} = col[g_{F1}…g_{Fk}], τ = col[τ_{1}…τ_{k}] and f_{C} = col[f_{C1}…f_{Ck}] are the compositions of the matrices and vectors for the whole system. The term J_{H}^{T} f_{C} represents the joint torques induced by the contact forces at the fingertips.

As stated, τ ∈ ℜ^{3k×1} represents the applied motor commands at the fingers’ joints. In order to simplify this equation, we can write the robot hand dynamics as follows:

M_{H} q̈ = τ + τ_{cg} − J_{H}^{T} f_{C}

where τ_{cg} = −C_{H} − g_{H}.
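The stacked hand dynamics can be exercised numerically. The sketch below uses random placeholder matrices with the right block structure (k three-DOF fingers), so only the bookkeeping of the inverse and forward dynamics is meaningful, not the physics.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3                                   # number of three-DOF fingers

# Block-diagonal inertia M_H = diag[M_F1 ... M_Fk], each block SPD.
M_H = np.zeros((3 * k, 3 * k))
for i in range(k):
    A = rng.normal(size=(3, 3))
    M_H[3*i:3*i+3, 3*i:3*i+3] = A @ A.T + 3.0 * np.eye(3)

J_H = rng.normal(size=(3 * k, 3 * k))   # hand Jacobian (placeholder values)
tau_cg = rng.normal(size=3 * k)         # lumped -C_H - g_H term
f_C = rng.normal(size=3 * k)            # stacked contact forces
qdd_des = rng.normal(size=3 * k)        # desired joint accelerations

# Inverse dynamics: torque realizing the desired accelerations ...
tau = M_H @ qdd_des - tau_cg + J_H.T @ f_C
# ... and forward dynamics recovering them.
qdd = np.linalg.solve(M_H, tau + tau_cg - J_H.T @ f_C)
```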

The dynamic model of a serial-link robot has been used in different approaches to control a robotic system for tracking [

Basically, the control approach suggested by [

This equation may contain holonomic and/or non-holonomic constraints, and represents the task for the robotic system expressed in the form of m constraints. Differentiating these constraints with respect to time, the task can be written as A q̈ = b, where A ∈ ℜ^{m×3k} and b ∈ ℜ^{m×1} are the matrix and vector obtained by differentiating the set of relations which satisfy the constraints. M_{H} is the inertia matrix of the robotic system, in this case of the multi-fingered robot hand, and the symbol + denotes the pseudo-inverse of a general matrix. As will be seen, this weighted pseudo-inverse appears in the resulting optimal control law.
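A minimal numerical sketch of this kind of weighted optimal control, assuming a task constraint A q̈ = b and dynamics M_H q̈ = τ + τ_cg (contact forces omitted for brevity), with random placeholder values for all matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 9, 4                          # 3k joints, m task constraints
A = rng.normal(size=(m, n))          # constraint matrix of the task
b = rng.normal(size=m)               # constraint vector of the task
S = rng.normal(size=(n, n))
M_H = S @ S.T + n * np.eye(n)        # SPD inertia matrix (placeholder)
tau_cg = rng.normal(size=n)          # Coriolis/centrifugal/gravity term

N = np.eye(n)                        # weighting matrix of the cost function
w, V = np.linalg.eigh(N)             # symmetric inverse square root N^(-1/2)
N_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

Minv = np.linalg.inv(M_H)
B = A @ Minv @ N_inv_sqrt
# Torque minimizing ||N^(1/2) tau|| subject to A @ qdd = b.
tau = N_inv_sqrt @ np.linalg.pinv(B) @ (b - A @ Minv @ tau_cg)

qdd = Minv @ (tau + tau_cg)          # resulting joint accelerations
```

For any full-row-rank A, the resulting accelerations satisfy the constraint exactly while the weighted torque norm is minimal.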

Although the optimal control is based on the robotic hand dynamic model of

The main objective of the proposed robotic manipulation system is to control the grasping force to a desired value such that the friction condition is satisfied, while also controlling the position of the object along a desired trajectory in the image space. Therefore, the task description as a constraint is given by the following equation in the image space:
s̈ = s̈_{d} + K_{D}(ṡ_{d} − ṡ) + K_{P}(s_{d} − s)

where s̈_{d}, ṡ_{d} and s_{d} are the desired image space accelerations, velocities and positions, respectively. K_{P} and K_{D} are proportional and derivative gain matrices, respectively. This equation can be expressed with regard to the image error in the following way:

s̈_{r} = s̈_{d} + K_{D} ė_{s} + K_{P} e_{s}

where e_{s} and ė_{s} are the image error and the time derivative of the error, respectively. The variable s̈_{r} denotes the reference image acceleration of our image-space based controller. This reference is related to the joint accelerations by differentiating ṡ = J_{T} q̇ with respect to time, where J_{T}^{+} is the pseudo-inverse of the Jacobian J_{T}, assuming J_{T} is continuously differentiable with respect to the joint coordinates q.
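The PD reference above amounts to a few lines of code; the gains and feature dimensions below are illustrative (four point features, hence 8 image coordinates), not values from the experiments.

```python
import numpy as np

K_P = np.diag([40.0] * 8)    # proportional gains (hypothetical values)
K_D = np.diag([12.0] * 8)    # derivative gains (hypothetical values)

def reference_acceleration(s, s_dot, s_d, sd_dot, sdd_d):
    """Image-space reference: sdd_d + K_D * error rate + K_P * error."""
    e_s = s_d - s            # image error
    ed_s = sd_dot - s_dot    # time derivative of the image error
    return sdd_d + K_D @ ed_s + K_P @ e_s

# Constant target 0.5 away in every coordinate, system at rest.
sdd_r = reference_acceleration(np.zeros(8), np.zeros(8),
                               np.full(8, 0.5), np.zeros(8), np.zeros(8))
```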

From this equation, it is possible to express the image tracking task in the form of the constraint description of

With this definition, ẍ_{s} is the object acceleration imposed by the reference controller. It is obtained by differentiating with respect to time and solving for s̈_{r}:

Solving the motion acceleration of the object from

This section describes a new optimal control framework for a robotic manipulation system using direct visual servoing. This control framework is based on the task description in the image space explained in the section above.

As stated, the control function that minimizes the motor signals of a robot hand while performing the task described in

As can be seen, the control law depends on the weighting matrix N through the term N^{−1/2}(J_{T} M_{H}^{−1} N^{−1/2})^{+} and, consequently, so does the resulting controller. Different control laws, obtained with different values of N, will be presented in the next section.

In order to demonstrate the stability of the control law, the closed loop is computed from

This equation can be simplified by pre-multiplying its left and right sides by the term (J_{T} M_{H}^{−1} N^{−1/2}):

Using

Therefore, when J_{T}^{+} is full rank, asymptotic tracking is achieved. This way, the convergence of the visual servo control law is demonstrated.

The manipulation of an object by a multi-fingered robot hand with fixed contact points alters the robot’s dynamics with a generalized contact force

In order to incorporate a control action which in practice keeps the contact forces constant without affecting the task achievement or image tracking, a modification of the control law is required, based on the null space of N^{−1/2}(J_{T} M_{H}^{−1} N^{−1/2})^{+}. Therefore, the term expressed as [I − N^{−1/2}(J_{T} M_{H}^{−1} N^{−1/2})^{+}(J_{T} M_{H}^{−1} N^{−1/2})N^{1/2}] · τ_{1} can be used to project the motor command τ_{1} onto the null space of the dynamic visual servo task. The fulfillment of the task is not affected by the choice of the command τ_{1}. Setting τ_{1} = C_{H} + g_{H} + τ_{0}, the Coriolis, centrifugal and gravitational forces can be compensated. Using this modification, the control law is augmented with this null-space projection term.
The term τ_{0} works in the null space of the task defined by (J_{T} M_{H}^{−1} N^{−1/2})^{+}. In this paper, τ_{0} represents an additional component of the final controller which is used for contact force stabilization. For this reason, this term is defined as τ_{0} = J_{H}^{T} f_{d}, where f_{d} contains the desired contact forces exerted at the fingertips, acting in the null space. Therefore, both the constraint imposed by the image tracking and the contact force can be set independently and accomplished with the resulting control law.
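The null-space projection can be checked numerically. The sketch below assumes N = I and random placeholders for the task matrix and dynamics, and verifies that the projected command produces no acceleration along the task directions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 9, 4
A = rng.normal(size=(m, n))            # task (image-tracking) constraint matrix
S = rng.normal(size=(n, n))
M_H = S @ S.T + n * np.eye(n)          # SPD inertia (placeholder)
Minv = np.linalg.inv(M_H)

# For N = I, the weighted term A M^(-1) N^(-1/2) reduces to A M^(-1).
B = A @ Minv
P = np.eye(n) - np.linalg.pinv(B) @ B  # null-space projector of the task

tau_0 = rng.normal(size=n)             # e.g. J_H^T f_d, force regulation command
tau_null = P @ tau_0                   # command projected into the null space

# Accelerations induced along the task directions by the projected command.
residual = A @ Minv @ tau_null
```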

One of the main contributions of the proposed framework is the possibility of generating different controllers to perform robotic manipulation tasks taking into account the robot dynamics and using visual-tactile information. Up to now, the control law has been written as a function of the weighting matrix N.

Considering

Simplifying this equation, the final control law gives the following expression:

As can be seen in the resulting expression, this control law depends only on J_{T}, a matrix with purely kinematic content.

An important value for the weighting matrix, due to its physical interpretation, is

Applying the pseudo-inverse as Q^{+} = Q^{T}(QQ^{T})^{−1} and simplifying this equation, the control law yields:
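The right pseudo-inverse identity used here, Q^{+} = Q^{T}(QQ^{T})^{−1}, holds whenever Q has full row rank, and can be checked against the Moore-Penrose inverse:

```python
import numpy as np

rng = np.random.default_rng(3)
Q = rng.normal(size=(4, 9))            # wide matrix, generically full row rank

# Right pseudo-inverse: Q @ Q_pinv is the 4x4 identity.
Q_pinv = Q.T @ np.linalg.inv(Q @ Q.T)
```

For full-row-rank Q, `np.linalg.pinv(Q)` returns the same matrix.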

In this subsection, a new value of

Simplifying the equation as before, the control law yields:

In the previous section, three visual controllers were obtained using the proposed control framework. In this section, different results are described in order to evaluate these controllers during manipulation tasks using the system architecture presented in Section 2. To do this, four visual features are extracted from the grasped object by using the eye-to-hand camera system. It is assumed that all the visual features remain visible during the manipulation experiment. As stated, only three fingers of the robot hand, and the last three degrees of freedom of each finger, are employed in the manipulation task.

In this section, the three controllers derived in Section 7 are evaluated in the joint and image spaces. The presented experiments consist of a manipulation task where the extracted image features must track the desired image trajectory with the weighting matrices ^{−2}, ^{−2} and ^{−1}, respectively. The tracking is correctly performed in the image space and the image error remains low.

In order to evaluate the behavior in the joint space during these manipulation experiments, the obtained torques are represented for the three weighting matrices ^{−2}, ^{−2} and ^{−1}. When the weighting matrix is set to ^{−2} (in red), lower torques are obtained in the first joints. Therefore, this diagonal matrix can be employed to distribute the torques and to diminish the effort in the desired joints. When the weighting matrix is set to ^{−1}, correct image tracking is also observed.

The desired contact force for the fingertips is regulated to 12 N during the experiment. The mean contact force errors obtained with the weighting matrices ^{−2}, ^{−2} and ^{−1} are 0.87 N, 1.01 N and 0.91 N, respectively.

In order to show the 3D behavior of the proposed controllers, this section presents a manipulation task where the grasped object must perform a rotation while undergoing a displacement. The image trajectories described by the four extracted marks using the three controllers are represented for the weighting matrix ^{−2}. The 3D trajectories described by the manipulated object with the matrices ^{−2}, ^{−2} and ^{−1} are indicated in green, red and orange, respectively.

Finally,

This paper presents a novel optimal control framework which allows defining new dynamic visual controllers to carry out dexterous manipulation with a robotic manipulation system. Making use of both the dynamics of the robotic hand and the definition of the image trajectory as the task description of the object motion, different image-based dynamic visual servoing systems are defined to perform dexterous manipulation with multi-fingered robotic hands. Fingertip force control has been proposed as an additional control command which acts in the null space of the task that manages the object tracking. This way, the desired contact force can be set independently and accomplished with any of the control laws derived from the optimal control framework. To that end, a set of tactile sensors has been used in the real experiments in order to verify the proposed control law.

The approach has been successfully verified by implementing some of the derived controllers on a real robotic manipulation system. As shown, the behavior in the task space is very similar for the different values of the weighting matrix, and the image error remains low. The fingertip interaction force is also regulated, and a low force error is obtained during the manipulation tasks.

This work was funded by the Spanish Ministry of Economy, the European FEDER funds and the Valencia Regional Government, through the research projects DPI2012-32390 and PROMETEO/2013/085.

The authors declare no conflict of interest.


Experimental setup for dexterous manipulation (robotic hand, manipulated object, eye-to-hand camera and reference systems).

Image trajectories obtained during the first set of experiments. (a) ^{−2}; (b) ^{−2}; (c) ^{−1}.

Torques obtained during the first set of experiments.

Mean total contact force during the first set of experiments. (a) ^{−2}; (b) ^{−2}; (c) ^{−1}.

Distribution of the pressure measurements registered by the arrays of tactile sensors of the three fingers in an intermediate iteration during the manipulation task (first task).

Image trajectories obtained during the second set of experiments. (a) ^{−2}; (b) ^{−2}; (c) ^{−1}.

3D trajectories of the manipulated object obtained during the second set of experiments.

Mean total contact force during the second set of experiments. (a) ^{−2}; (b) ^{−2}; (c) ^{−1}.

Distribution of the pressure measurements registered by the arrays of tactile sensors of the three fingers in an intermediate iteration during the manipulation task (second task).

Mean image and force contact error during the first set of experiments.

| Weighting matrix | Mean image error | Mean contact force error (N) |
| ^{−2} | 2.487 | 0.87 |
| ^{−2} | 2.621 | 1.01 |
| ^{−1} | 4.010 | 0.91 |