
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/)

In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN, and this learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained by using a robust KF to refine the ENN learning result, so as to achieve precise robotic convergence to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair obtained from the KF cycle, to ensure globally stable robot manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.

Visual sensors integrated with robotic manipulators can be increasingly beneficial for robotic perception and behavioral flexibility in unstructured environments [

Vision-based robotic manipulation depends mainly on visual information feedback to control the positioning or motion of a manipulator [

In PBVS, vision provides the information needed to regulate the end-effector pose relative to the object in Cartesian space. Owing to its global asymptotic stability, this method is suitable for most industrial robotic manipulators [

In IBVS, the feature points on the image plane are controlled directly for robotic manipulation, and the image Jacobian matrix describes the differential relation between the image error and the end-effector pose [

With respect to the modified methods, the presented image Jacobian matrix requires depth information and camera calibration parameters (as in traditional IBVS methodologies); such requirements inevitably lead to task singularities, making it difficult to ensure stable convergence to a desired target [

One category comprises online estimation techniques (e.g., the well-known Broyden-based method and its modified variants) [
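As a concrete illustration, the classic Broyden rank-one update for online Jacobian estimation can be sketched as follows. This is a minimal sketch; the function name, damping factor, and dimensions are illustrative and not taken from the cited works:

```python
import numpy as np

def broyden_update(J, ds, dr, lam=1.0):
    """Rank-one Broyden update of an estimated image Jacobian.

    J   : current estimate, shape (m, n)
    ds  : observed change in image features, shape (m,)
    dr  : executed change in robot pose, shape (n,)
    lam : step size in (0, 1]; values below 1 damp the correction
    """
    denom = dr @ dr
    if denom < 1e-12:            # no motion carries no information
        return J
    residual = ds - J @ dr       # prediction error of the current estimate
    return J + lam * np.outer(residual, dr) / denom
```

With lam = 1, the updated estimate satisfies the secant condition (J_new @ dr equals ds exactly along the explored direction), which is why repeated updates track a slowly varying Jacobian without any camera model.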

Another solution to this estimation problem involves machine learning techniques [

In this paper, the discussion focuses on non-parametric Jacobian matrix estimation problems. A new global-state-space IBVS scheme, which combines KF and ENN learning techniques, is proposed for uncalibrated, model-independent robotic manipulation with robust stability in global state space, where the image features are constrained to the camera field-of-view (FOV). Unlike traditional PBVS methods, IBVS methods have the merit of requiring calibration of neither the camera parameters nor the robotic kinematic model. Moreover, for IBVS methods, dynamic Jacobian estimation does not require depth information. The main contributions of the paper are as follows:

Jacobian online identification problems are solved by introducing a state-space formulation, which is incorporated into robust KF techniques. The traditional KF is a minimum-variance state estimator for linear dynamic systems with Gaussian white noise sequences. In most practical settings, however, the observation noise is compound and statistically correlated with the model noise (rather than being a simple white noise sequence). Therefore, we derive a robust KF estimator with multiple orders of noise for the Jacobian online estimation task.

The KF is sensitive with respect to the initial robotic state and the initial noises' statistical characteristics (

In the online testing phase, the precise positioning problem is solved by using a robust KF to refine the ENN learning result, so as to achieve robotic convergence to the desired pose. Afterward, the ENN's weights are updated through re-training using a new input-output data pair obtained from the KF cycle, to ensure globally stable robot manipulation. Finally, we have designed a novel global-state-space IBVS framework that combines the robust KF with ENN learning. In our framework, the image Jacobian matrix is estimated without accounting for camera calibration and modeling error, and the servoing system performs robustly despite external noise and system destabilization.

The paper is organized as follows: a general description of visual servoing (VS) for uncalibrated, model-independent robotic manipulation is presented in the next section. Jacobian estimation with the KF is presented in Section 3, where we also derive a robust KF with statistically correlated noises. The ENN for global Jacobian learning and the novel global-state-space IBVS scheme are presented in Section 4. The simulation and experimental results are discussed in Section 5, and Section 6 presents our conclusions.

In this section, we assume a robotic manipulator whose image features lie in an m-dimensional vision space, with a desired feature vector specified in that space, and whose pose lies in an n-dimensional workspace.

The task error ^{χ}e ∈ R^{n} is defined between the current end-effector pose ^{χ}P_{E} and the target pose ^{χ}P_{T} in the Cartesian space.

The image features ^{I}ħ_{E} and ^{I}ħ_{T} are the projections of the poses ^{χ}P_{E} and ^{χ}P_{T} onto the image plane, so the task error ^{χ}e can be expressed equivalently in the image space.

The association between the m-dimensional image space and the n-dimensional end-effector pose ^{χ}P_{E} is described by the image Jacobian matrix.

Robot-attached visual sensors can be considered an instrumental system whose state vector is formed by concatenating the row and column elements of the image Jacobian matrix.

The robotic state-space model is a linear discrete-time dynamical system according to:
where the state vector in R^{mn × 1} is formed by stacking the elements of the m × n Jacobian matrix, the model noise has zero mean and variance Q(k), the observation noise vector lies in R^{m × 1}, and the output vector in R^{m × 1} is given by:
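To make the state-space formulation concrete, the following sketch (the symbol names are ours, not the paper's) shows how stacking the Jacobian elements into a state vector turns the feature increment into a linear observation that a KF can process:

```python
import numpy as np

def jacobian_state_space(m, n, J, dr):
    """Linear observation model for Jacobian estimation.

    State x = vec(J) stacks the m x n Jacobian row by row; the feature
    increment ds = J @ dr then becomes a *linear* function of x:
        ds = H @ x,  with H = I_m (kron) dr^T  of shape (m, m*n),
    so a Kalman filter can track x from successive (dr, ds) pairs.
    """
    x = J.reshape(-1)                       # state: stacked Jacobian, (m*n,)
    H = np.kron(np.eye(m), dr[None, :])     # observation matrix, (m, m*n)
    return x, H
```

For the four-point, six-DOF setting of Section 5 (m = 8, n = 6), the state has 48 elements and H is 8 × 48, matching the covariance dimensions used later in Case 4.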

Thus, the observation matrix

The Kalman-Bucy filter (KBF) [ ] assumes simple white noise sequences; in practice, however, the observation noise may be compound and statistically correlated with the model noise, rather than being simple white noise. In the next section, we present a robust Kalman filter with statistically correlated noises for the image Jacobian estimation task.

For a universal environment, the observation noise vector is assumed to follow a first-order Markov chain model: the noise at time k equals the previous noise value multiplied by a coefficient matrix, plus a Gaussian white noise term with zero mean and known variance.
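A minimal simulation of such first-order Markov (colored) observation noise, with illustrative symbol names, might look like this:

```python
import numpy as np

def markov_noise(psi, var, steps, rng=None):
    """Generate scalar first-order Markov observation noise
        v(k) = psi * v(k-1) + zeta(k),   zeta ~ N(0, var),
    which is time-correlated (colored) rather than white when psi != 0."""
    if rng is None:
        rng = np.random.default_rng(0)
    v = np.zeros(steps)
    for k in range(1, steps):
        v[k] = psi * v[k - 1] + rng.normal(0.0, np.sqrt(var))
    return v
```

For a stationary sequence the lag-1 autocorrelation is approximately psi; setting psi = 0 recovers the white-noise case assumed by the traditional KF, which is exactly the distinction the robust filter is designed to handle.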

Based upon

The transformed observation and its noise are defined accordingly.

The transformed observation noise is a Gaussian white noise sequence with zero mean and known variance; however, it remains statistically correlated with the model noise, with a known cross-covariance matrix. In this case, the application of traditional KF methods is limited. To solve this problem, we introduce a filtering-revise-vector, such that the state

In order to eliminate the correlation between the transformed observation noise and the model noise,

Thus, the filtering-revised-vector is:

Estimation equations: the one-step state estimation equation and the variance matrix of estimation error are given by:

Updating equations: the state filtering equation, filtering gain, and variance matrix for filtering error are as follows:
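One standard way to realize a KF whose process and measurement noises are cross-correlated is to fold the cross-covariance into the gain computation. The following is a minimal sketch under that assumption (random-walk state model; all names are illustrative and not the paper's notation):

```python
import numpy as np

def kf_step_correlated(x, P, y, H, Q, R, S):
    """One predict+update cycle of a Kalman filter whose process noise w
    and measurement noise v are cross-correlated: E[w v^T] = S.

    Random-walk state model x(k+1) = x(k) + w(k); observation
    y(k) = H x(k) + v(k).  S enters both the gain and the innovation
    covariance as a correction term (S = 0 recovers the standard KF).
    """
    # --- prediction: one-step state estimate and its error covariance ---
    x_pred = x
    P_pred = P + Q
    # --- update with the correlated-noise-corrected gain ---
    innov = y - H @ x_pred
    C = P_pred @ H.T + S                                 # state/innovation cross-cov
    Sigma = H @ P_pred @ H.T + H @ S + S.T @ H.T + R     # innovation covariance
    K = C @ np.linalg.inv(Sigma)
    x_new = x_pred + K @ innov
    P_new = P_pred - K @ Sigma @ K.T
    return x_new, P_new
```

With S = 0 this reduces to the ordinary predict/update pair, which is one way to see that the robust filter generalizes rather than replaces the KBF baseline compared against in Section 5.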

As shown in

In this section, a novel global-state-space IBVS method is proposed, which combines the robust KF with Elman neural network learning techniques, so as to enable stable robot manipulation in global state space.

The original Elman neural network (ENN) was proposed by Elman in 1990 [

Input layers:

Hidden layers:

Output layers:

The output layer activation function

The learning algorithm is applied to determine the neural network structure. The goal of training is to obtain convergence of the connection weights. The input samples of the network are the image feature vectors of 600 training poses (an 8 × 600 matrix), and the output samples are the corresponding Jacobian matrices, each stacked into a 48 × 1 vector (a 48 × 600 matrix). The learning laws follow the gradient descent method [
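A minimal NumPy sketch of an Elman network trained by gradient descent is given below. The input and output widths (8 and 48) follow the text; the hidden size, learning rate, and the usual Elman simplification of treating the context path as a constant during the gradient step are our assumptions:

```python
import numpy as np

class ElmanNet:
    """Minimal Elman network: input -> hidden (tanh, with context feedback)
    -> linear output.  The context layer holds the previous hidden state."""

    def __init__(self, n_in=8, n_hid=20, n_out=48, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in  = 0.1 * rng.standard_normal((n_hid, n_in))
        self.W_ctx = 0.1 * rng.standard_normal((n_hid, n_hid))
        self.W_out = 0.1 * rng.standard_normal((n_out, n_hid))
        self.ctx = np.zeros(n_hid)
        self.lr = lr

    def forward(self, x):
        self.prev_ctx = self.ctx                 # remember the fed-back state
        self.h = np.tanh(self.W_in @ x + self.W_ctx @ self.prev_ctx)
        self.ctx = self.h                        # becomes next step's context
        return self.W_out @ self.h

    def train_step(self, x, target):
        """One gradient-descent step on 0.5*||y - target||^2, treating the
        context input as a constant (the usual Elman simplification)."""
        y = self.forward(x)
        e = y - target
        dh = (self.W_out.T @ e) * (1.0 - self.h ** 2)   # back through tanh
        self.W_out -= self.lr * np.outer(e, self.h)
        self.W_in  -= self.lr * np.outer(dh, x)
        self.W_ctx -= self.lr * np.outer(dh, self.prev_ctx)
        return 0.5 * float(e @ e)
```

In the paper's setting, x would be an 8-element feature vector and the target a 48-element stacked Jacobian; as the text notes, a very low offline training error is not required, since the robust KF refines the estimate online.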

The schematic of the proposed global-state-space IBVS is shown in

First, the Jacobian for the global mapping relationship between the vision space and the robot workspace is approximated using the ENN. As mentioned in the last section, a very low approximation error is not necessary during offline training. In the testing phase, however, the supposed desired Jacobian

The initial state of the robot is very important for stable robot manipulation. The authors in [ ] estimate the initial Jacobian from the robot's n most recent motion steps.

In contrast, in this paper, at the initial time

The robust KF estimator (mentioned in Section 3.2) is used to estimate the desired Jacobian

A control law should be employed to drive the robot from its present pose to its desired pose. The image error is mapped through the pseudo-inverse of the estimated Jacobian to regulate the end-effector pose ^{χ}P_{E}.
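A proportional control law of this kind can be sketched as follows. This is a generic IBVS sketch under our own naming (the gain lam and function signature are assumptions, not the paper's notation):

```python
import numpy as np

def ibvs_control(J_hat, s, s_star, lam=0.5):
    """Proportional visual-servo control law: map the image-feature error
    e = s - s_star through the pseudo-inverse of the estimated Jacobian
    to an end-effector velocity command dr = -lam * pinv(J_hat) @ e."""
    e = s - s_star
    return -lam * np.linalg.pinv(J_hat) @ e
```

Under this law the feature error contracts at each step whenever the Jacobian estimate is accurate enough, which is why the quality of the KF-refined estimate directly determines positioning precision.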

The simulation data consist of four feature points, which are used for robot manipulation. The desired feature vector is composed of the image coordinates (u_{i}, ν_{i})^{T} of the four points.

Suppose the linear velocity of the end-effector is (T_{x}, T_{y}, T_{z})^{T} and the angular velocity is (ω_{x}, ω_{y}, ω_{z})^{T}, so that the full velocity screw is a 6 × 1 vector. According to
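For simulation purposes only (the proposed controller estimates the Jacobian online and needs no depth), the feature motion induced by this velocity screw can be generated with the classic point-feature interaction matrix; normalized image coordinates and a known depth Z are assumptions of the simulator, not of the method:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classic 2x6 interaction matrix for one point feature with
    normalized image coordinates (x, y) and depth Z:
        d(x, y)/dt = L @ (Tx, Ty, Tz, wx, wy, wz)^T.
    Stacking the rows for four points gives an 8 x 6 image Jacobian,
    matching the dimensions used in the simulation section."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
```

A quick sanity check: a point at the image center is unaffected by translation along, or rotation about, the optical axis, which the matrix reproduces.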

According to _{ik}

An eye-in-hand simulation environment was set up to test the performance of the proposed method. The camera motion covers linear and rotational movement, as well as combined translational and rotational movement, in the workspace. In simulation, we consider two difficult tasks comprising four cases (combined movement and rotational movement). The evaluation goals of Cases 1 and 2 are to test the feature trajectories and the camera trajectory performance of PBVS, IBVS, and our method. Cases 3 and 4 test the global stability and robustness of both the KBF method and our method.

Case 1 involves a combination of the translational and rotational movements of the camera. The results are illustrated in

Case 2 involves a pure camera rotational movement around the optical axis. The feature points obtained by our method (

Therefore, compared with the IBVS and PBVS methods, our approach exploits the advantage of the PBVS method to improve the camera motion trajectory, and the advantage of the IBVS method to constrain the feature trajectories so that the feature points do not leave the FOV.

Furthermore, because PBVS uses planar homography to estimate the pose of the object with respect to the end-effector, and because IBVS relies on Jacobian computation, both methods are sensitive to camera calibration error. In the works [

Case 3 deals with dynamic performance in a noisy environment. This case examines the stability of our method in comparison with the KBF method [ ]. Additive Gaussian noises with zero mean and variance 1 × 10^{−3} were added to the state and observation models (the additive noises shown in

When the variance is 9 × 10^{−3}, the camera pose in the Cartesian space and the feature trajectories on the image plane for this situation are given in the corresponding figures; when the variance is increased further, to the order of 10^{−2}, the KBF method fails to position the end-effector,

Case 4 deals with dynamic performance under system destabilization. During actual robot manipulation, the statistical characteristics of the model noise and observation noise vary, which leads to system destabilization. For simplicity, but without loss of generality, we consider scaled identity matrices of size 48 × 48 and 8 × 8 for the covariances of the model and observation noises, where

The dynamic performance of the KBF method is shown in

On the other hand, different values of Q(

The experiments were carried out using an eye-in-hand configuration (

Experiments for the following cases were performed: Case 1 deals with pure rotational movement of the camera, with specified initial and desired feature vectors

The experimental results are shown in

In the experiments, our approach directly controls the feature points on the image plane for robotic task manipulation; in other words, the proposed method observes the changes of the features on the image plane and directly drives the six-degree-of-freedom robot to converge toward the desired pose. As described in Section 2, if the feature errors converge toward zero, the robot has successfully achieved the manipulation task. Therefore, the experimental results validate the proposed method.

In this work, a new global-state-space IBVS scheme for uncalibrated, model-independent robot manipulation in an eye-in-hand configuration is discussed. Here, a robust KF cooperates with ENN learning techniques so as to ensure robust stability in global state space, with the image features kept within the camera field-of-view (FOV). The image Jacobian matrix is estimated without requiring camera parameters or depth information; therefore, our method avoids the performance degradation caused by calibration and modeling errors. Through various simulations and experiments comparing IBVS, PBVS, KBF, and our method, we have shown that our approach takes advantage of PBVS to improve the camera trajectory and of IBVS to avoid the loss of image features. Finally, in comparison with the KBF method, our method is robust to both external noise and system destabilization.

This work was supported in part by the Fujian Provincial Natural Science Foundation of China (No. 2010J05141).

The authors declare no conflict of interest.

The general structure of uncalibrated model-independent VS system.

The structure of the robust KF for Jacobian matrix estimation. We introduce a filtering-revise-vector to improve the robustness of the filter's performance with respect to universal dynamic noises.

The framework of a feedback Elman neural network with its three layer structure.

The scheme of global-state-space IBVS, which combines the robust KF with ENN learning. Online weight updates ensure globally stable robot manipulation.

Results obtained by our method for Case 1. The sampling interval is 0.1, and the control rate

Results obtained by the PBVS method for Case 1. The intrinsic camera parameters are chosen as u_{0} = v_{0} = 256 and k_{u} = k_{v}

Results obtained by the IBVS method for Case 1. The camera parameters are the same as in the PBVS method. The feature trajectories are constrained on the FOV, but the camera trajectory becomes slightly odd,

Comparison of our method with the PBVS method for Case 2. The feature trajectories are constrained on the FOV by our method, while the results of PBVS almost leave the FOV. (

Comparison of our method with KBF for Case 3. The additive noises are set at zero mean, and variances are set at 1 × 10^{−3}. (

Comparison of our method with KBF for Case 3. The additive noises are set at zero mean, and the variances are set at 9 × 10^{−3}. (

Results obtained by the KBF method for Case 4. (

Results obtained by our method for Case 4. (

(

Experimental results obtained by our method for Case 1 rotational movement. (

Experimental results obtained by our method for Case 2 translational movement. (

Experimental results obtained by our method for Case 3 translational and rotational movement. (