Design of a Finite-Time Adaptive Controller for Image-Based Uncalibrated Visual Servo Systems with Uncertainties in Robot and Camera Models

To address the time-varying uncertainties of the robot and camera models in image-based uncalibrated visual servo (IBUVS) systems, a finite-time adaptive controller is proposed based on a depth-independent Jacobian matrix. First, adaptive laws for the depth, kinematic, and dynamic parameters are proposed to handle the uncertainty of the robot model and the camera model. Second, a finite-time adaptive controller is designed using a nonlinear proportional-derivative plus dynamic feedforward compensation structure. By applying a continuous non-smooth nonlinear function to the feedback error, the control quality of the closed-loop system is improved, and the desired image trajectory is tracked in finite time. Finally, using Lyapunov stability theory and finite-time stability theory, the global finite-time stability of the closed-loop system is proven. The experimental results show that the proposed controller adapts not only to changes between the eye-in-hand (EIH) and eye-to-hand (ETH) visual configurations but also to changes in the relative pose of the feature points and in the camera's relative pose parameters. At the same time, the convergence rate near the equilibrium point is improved, and the controller exhibits good dynamic stability.


Introduction
Intelligent robots with sensing abilities have been recognized as the mainstream trend in robot development. Among the many robot sensors, vision sensors have become some of the most important due to their large amount of information, wide range of applications, and non-contact characteristics [1]. A vision sensor can increase the adaptability of a robot to the surrounding environment and expand its field of application. This idea directly gave birth to robot visual servo control technology [2]. Robot visual servo control uses visual sensors to indirectly detect the current pose of the robot or the relative pose of the robot with respect to the target object; on this basis, robot positioning control or trajectory tracking is realized. Thus, robot visual servo control is an important control means for robot systems [3].
A robot vision servo system includes two parts: a robot system and a vision system. Before operation, the system must be calibrated, which includes camera calibration, robot calibration, and calibration of the relative pose between the robot and the camera (also known as hand-eye calibration). The performance of a traditional robot visual servo system is highly dependent on the calibration accuracy, which, in many cases, is limited by the following: (1) calibration results are valid only under the calibration conditions, and re-calibration is required when the system structure changes even slightly; (2) in many working conditions, the calibrated system parameters may change slowly; (3) due to camera

The contribution and innovation of this paper are mainly reflected as follows: (1) In the uncalibrated robot visual servo control system, based on comprehensive consideration of uncertain dynamics, unknown kinematics, and time-varying depth information, a finite-time adaptive control scheme is proposed to solve the global finite-time trajectory tracking problem of the robot manipulator. Compared with references [13,14], the controller accounts for more unknown parameters of the vision robot, and the convergence speed is also significantly improved. (2) For the problem of parameter uncertainty, three adaptive laws are designed to accurately estimate the kinematic, dynamic, and depth uncertainty parameters. On this basis, a visual tracking control scheme based on a depth-independent Jacobian matrix is proposed. Compared with references [6-8], the decoupling of the depth parameter from the Jacobian matrix is realized in this paper. Compared with references [13,14], an adaptive law is specially designed to accurately estimate the uncertain dynamic parameters of the robot.
(3) Compared with references [19,20], to address the difficulty of accurately measuring the image-space velocity, we define a new vector composed of the joint-space velocity and the reference joint velocity and use an adaptive law to estimate the inverse dynamics of the system. (4) In the design of the control law and controller, we propose a scheme that does not require the image-space velocity and extend finite-time stability control to time-varying nonlinear systems with multiple uncertain parameters. Compared with references [21,22], the proposed controller converges faster. (5) A notable difference from [24] is that the proposed control scheme extends the asymptotic stability results to finite-time stability; the asymptotic stability control scheme can be regarded as a special case of the FTS scheme with exponent α = 1.
The rest of this paper is organized as follows: Kinematic analysis of an image-based uncalibrated visual servo system is presented in Section 2 and includes "Differential kinematics of a visual servo in an ETH configuration" and "Differential kinematics of a visual servo in an EIH configuration". Section 3 discusses the control model of the manipulator based on dynamics. Section 4 describes the design and stability analysis of a finite-time tracking controller. Sections 5 and 6 present the results of the experiment and the final conclusions of the study, respectively.

Differential Kinematics of the Visual Servo in an ETH Configuration
In the IBVS system, the depth parameter z is coupled with the image Jacobian matrix in reciprocal form, as shown in Equation (1), where z_i is the depth value of the i-th image feature point and the normalized image coordinates are defined as u̅ = (u − u_0)/(f k_u) and v̅ = (v − v_0)sin θ/(f k_u). This coupling makes the depth parameter difficult to process and estimate. To solve this problem, the depth parameter needs to be decoupled from the Jacobian matrix.
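As a small sketch of this camera model, the snippet below maps pixel coordinates to normalized coordinates; the intrinsic values (f, k_u, k_v, u_0, v_0, θ) are assumed placeholders, not calibrated quantities, and a separate v-axis scale k_v is used here following the common pinhole convention.

```python
import numpy as np

# Assumed placeholder intrinsics: focal length f (m), pixel densities k_u, k_v
# (pixels/m), principal point (u0, v0), and pixel-axis angle theta.
f, k_u, k_v = 0.004, 1.25e5, 1.25e5
u0, v0, theta = 320.0, 240.0, np.pi / 2   # theta = 90 deg: square pixel grid

def normalize(u, v):
    """Map pixel coordinates to the normalized coordinates used in Eq. (1);
    after this step the depth z enters the image Jacobian only as 1/z."""
    u_bar = (u - u0) / (f * k_u)
    v_bar = (v - v0) * np.sin(theta) / (f * k_v)
    return u_bar, v_bar

u_bar, v_bar = normalize(420.0, 300.0)
```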
Based on the analysis of the kinematic relations of the camera, the position vector x_i^c(t) ∈ R^(3×1) of the feature point P in the three-dimensional camera coordinate system and the image coordinate vector y_i(t) ∈ R^(2×1) satisfy the following relation.
where x_i^e is the position vector of the feature points in the three-dimensional coordinate system of the manipulator end-effector. By substituting Equation (7) into Equation (5), the complete differential kinematic relation of the visual servo in the ETH configuration can be obtained as follows:
where q(t), q̇(t) ∈ R^(n×1) represent the joint angle vector and joint velocity vector of the manipulator, respectively; R_e^b ∈ SO(3) and P_e^b ∈ R^(3×1) represent the rotation matrix and translation vector of the manipulator forward kinematics, respectively; and the row vectors m_1^T, m_2^T, m_3^T ∈ R^(1×3) represent the first, second, and third rows of the matrix M_b^c, respectively. The matrix M_b^c ∈ R^(3×3) represents the perspective projection matrix from the camera image plane to the reference coordinate system of the manipulator base and is specifically defined as follows: where R_b^c is the rotation part of the camera external parameter matrix, the subscript b denotes the manipulator base coordinate system, and the superscript c denotes the camera coordinate system. Thus, the depth-independent Jacobian matrix in the ETH configuration can be derived as follows: As can be seen from the above formula, the depth-independent Jacobian matrix does not contain the depth parameters of the feature points, thus achieving decoupling from the depth parameter. In addition, by differentiating the depth parameter z_i(t), the differential kinematic relation of the depth can be obtained as follows:
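The decoupling idea can be checked numerically: with a 3×4 projection matrix M whose rows are m_1^T, m_2^T, m_3^T, the product z_i·y_i = [m_1^T; m_2^T]x̃ is linear in the homogeneous feature position, so no 1/z term remains. The matrix and point below are illustrative values, not calibrated data.

```python
import numpy as np

# Hypothetical 3x4 perspective projection matrix M (intrinsics times
# extrinsics); the numbers are illustrative only.
M = np.array([[500.0, 0.0, 320.0, 10.0],
              [0.0, 500.0, 240.0, 5.0],
              [0.0, 0.0, 1.0, 0.2]])

x = np.array([0.3, -0.1, 1.5, 1.0])   # homogeneous 3D feature point

z = M[2] @ x                # depth z = m3^T x (unified form of Eq. (19))
y = (M[:2] @ x) / z         # image coordinates y = [m1^T x, m2^T x] / z

# Decoupling check: z * y is linear in x, with no residual 1/z factor
assert np.allclose(z * y, M[:2] @ x)
```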

Differential Kinematics of the Visual Servo in an EIH Configuration
In the EIH configuration, we also focus on the transformation T_b^c ∈ SE(3) between the camera coordinate system and the base coordinate system, which involves the camera pose matrix T_e^c and the manipulator kinematic transformation T_e^b(t). Since the camera is installed on the end-effector in the EIH configuration, the pose relationship T_e^c is constant. By differentiating Equation (12), the differential kinematic relation of the EIH configuration can be obtained, as shown in Equation (13).
where x_i^b ∈ R^(3×1) is the coordinate vector of the feature points in the base coordinate system. By substituting Equation (13) into Equation (5), we obtain the following formula: where the vectors m_1^T, m_2^T, and m_3^T are the first, second, and third row vectors of the matrix M_e^c ∈ R^(3×3), respectively, and the perspective projection matrix M_e^c is specifically defined as follows: where R_e^c is the rotation part of the camera external parameter matrix T_e^c, the subscript e denotes the manipulator end-effector coordinate system, and the superscript c denotes the camera coordinate system. Thus, the depth-independent Jacobian matrix in the EIH configuration can be deduced as follows: By comparing the differential kinematic relations (10) and (16) of the ETH and EIH configurations, it is not difficult to find that the depth-independent Jacobian matrix D_i and the vector d_i^T have similar mathematical descriptions in the two visual configurations. Therefore, the two configurations can be unified as follows: The complete mapping model of the visual system (17) is rewritten as (18).
where T(t) is the forward kinematic coordinate transformation matrix of the manipulator, and M ∈ R^(3×4) is the equivalent perspective projection matrix.
In the visual servo system, the visual mapping relationships of different configurations can be uniformly represented by Equation (18), but the physical meanings of the representations of each matrix and vector are different, as shown in Table 1.
The depth parameters of the two configurations can be unified as follows: where m_3^T ∈ R^(1×4) is the third-row vector of the matrix M. Therefore, Equations (8) and (14) can be unified as follows: where the matrices R and P are the rotation and translation parts of the manipulator kinematic transformation matrix, respectively, and m_1^T, m_2^T, m_3^T ∈ R^(1×4) are the first, second, and third row vectors of the matrix M, respectively. The specific expressions of the matrix M in the different visual configurations are given in Table 1.
By taking the derivative of Equation (19) with respect to time, the differential relationship between the depth parameter and the joint space can be obtained as follows: Equations (18)-(23) can be regarded as a unified differential kinematic framework. During system analysis, with the help of this unified kinematic model of the visual servo system, it is not necessary to consider the specific visual configuration; it suffices to set the corresponding parameters according to Table 1.

Control Model of a Manipulator Based on Dynamics
According to Lagrangian mechanics, the dynamic equation of the manipulator system can be given by the following formula: H(q)q̈ + C(q, q̇)q̇ + g(q) = τ, (24) where q̇, q̈ ∈ R^(n×1) are the joint velocity and acceleration vectors, respectively; H(q) ∈ R^(n×n) is the inertia matrix; C(q, q̇) ∈ R^(n×n) is the Coriolis matrix; g(q) ∈ R^(n×1) is the gravitational torque; and τ ∈ R^(n×1) is the control torque exerted on the robot joints, which is the design variable of the dynamics controller. The dynamic Equation (24) has the following properties.

Property 1 ([28]). H(q) is a symmetric positive definite matrix, and there exist positive constants α_1, α_2, h_1, h_2 such that the following formula holds.

Property 2 ([28]). Ḣ(q) − 2C(q, q̇) is a skew-symmetric matrix; that is, for any vector ζ ∈ R^(n×1), the following equation holds: ζ^T(Ḣ(q) − 2C(q, q̇))ζ = 0.
Property 3 ([28]). The Coriolis matrix is bounded by the joint velocity, i.e., ‖C(q, q̇)‖ ≤ k‖q̇‖, where k is an appropriate positive constant.

Property 4 ([28]). The gravitational torque is bounded, i.e., ‖g(q)‖ ≤ g_0, where g_0 is an appropriate positive constant.
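The positive definiteness of the inertia matrix and the skew-symmetry underlying Property 2 can be verified numerically on a standard two-link planar arm model; the lumped parameters a_1, a_2, a_3 below are textbook-style illustrative values, not the parameters of the Kinova MICO arm used later.

```python
import numpy as np

# Two-link planar arm with assumed lumped parameters a1, a2, a3.
a1, a2, a3 = 3.0, 1.0, 0.8

def H(q):
    """Inertia matrix of the two-link model."""
    c2 = np.cos(q[1])
    return np.array([[a1 + 2 * a3 * c2, a2 + a3 * c2],
                     [a2 + a3 * c2, a2]])

def C(q, qd):
    """Coriolis matrix (Christoffel-symbol convention)."""
    s2 = np.sin(q[1])
    return np.array([[-a3 * s2 * qd[1], -a3 * s2 * (qd[0] + qd[1])],
                     [a3 * s2 * qd[0], 0.0]])

def Hdot(q, qd):
    """Time derivative of H along a trajectory with joint velocity qd."""
    s2 = np.sin(q[1])
    return qd[1] * np.array([[-2 * a3 * s2, -a3 * s2],
                             [-a3 * s2, 0.0]])

q, qd = np.array([0.4, 1.1]), np.array([0.7, -0.3])
N = Hdot(q, qd) - 2 * C(q, qd)

assert np.allclose(N, -N.T)                   # Hdot - 2C is skew-symmetric
assert np.all(np.linalg.eigvalsh(H(q)) > 0)   # H is positive definite
```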

Property 5 ([24]). By selecting a dynamic parameter vector θ_d ∈ R^(p×1) of appropriate dimension, Equation (24) can be linearly parameterized into the following equation: H(q)ξ̇ + C(q, q̇)ξ + g(q) = Y_d(q, q̇, ξ, ξ̇)θ_d, where Y_d(q, q̇, ξ, ξ̇) ∈ R^(n×p) is the regression matrix, n is the number of joint angles, and p is the number of unknown parameters.
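A minimal instance of this linear parameterization, assuming a hypothetical one-link pendulum with inertia θ_1 and gravity coefficient θ_2 and no Coriolis term:

```python
import numpy as np

# One-link pendulum sketch of Property 5: H(q) = theta1, g(q) = theta2*cos(q),
# C = 0, so H*dxi + C*xi + g = Y(q, xi, dxi) @ theta.
theta = np.array([1.2, 4.9])      # unknown dynamic parameters (assumed values)

def Y(q, xi, dxi):
    """1 x 2 regression matrix of the linear parameterization."""
    return np.array([[dxi, np.cos(q)]])

q, xi, dxi = 0.6, 0.3, -0.5
lhs = theta[0] * dxi + theta[1] * np.cos(q)   # direct evaluation of the dynamics
assert np.isclose((Y(q, xi, dxi) @ theta)[0], lhs)
```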

Design and Stability Analysis of a Finite-Time Tracking Controller
Define the image tracking error ∆y = y − y_d and the reference image velocity ẏ_r = ẏ_d − λ∆y, (30) where λ ∈ R is an undetermined constant. According to Property 5, the Lagrange dynamic equation of the manipulator with unknown parameters can be linearized as follows: where θ̂_d is the estimated value of the unknown parameter vector, which is estimated online by the adaptive law to be determined. According to Equation (31), the dynamic estimation error can be linearized as follows: where ∆θ_d = θ̂_d − θ_d, and the matrices H, Ĥ, C, Ĉ are abbreviations of H(q), Ĥ(q), C(q, q̇), Ĉ(q, q̇), respectively. Based on Equations (21) and (23), the compensated depth Jacobian matrix Q is constructed as follows: where α_1 ∈ R is a constant to be determined. The estimated value Q̂ is as follows: For the adaptive Jacobian scheme, the depth-independent Jacobian matrix D and its associated vector d^T have the following important properties (proven in Appendices A-C).
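As a small numeric sketch with assumed pixel values, the image tracking error and the reference image velocity ẏ_r = ẏ_d − λ∆y of Equation (30) are computed as:

```python
import numpy as np

lam = 2.0                          # gain lambda > 0 (assumed value)
y = np.array([330.0, 248.0])       # measured feature point (pixels)
y_d = np.array([320.0, 240.0])     # desired feature point (pixels)
yd_dot = np.array([1.0, 0.0])      # desired image velocity (pixels/s)

dy = y - y_d                       # image tracking error, Eq. (30)
y_r_dot = yd_dot - lam * dy        # reference image velocity
```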

Property 6.
For any vector η ∈ R^(n×1), the matrix product Dη can be expressed in a form linear in the unknown parameter vector θ_k: where Y_k,1(y, q, η) ∈ R^(2×p_1) is a regression matrix independent of the unknown parameter vector θ_k ∈ R^(p_1×1), and the dimension satisfies p_1 ≤ 36.

Property 7.
For any vector η ∈ R^(n×1), the product d^T η can be expressed in a form linear in the unknown parameter vector θ_k: where Y_k,2(q, η) ∈ R^(1×p_1) is a regression vector independent of the unknown parameter vector θ_k, and the dimension satisfies p_1 ≤ 36.
Because the depth-independent Jacobian matrix does not contain the depth parameter, the depth must be compensated separately when designing the system.

Property 8. The depth z has the following linear parameterized form: where Y_z(q) ∈ R^(1×p_2) is a regression vector independent of the unknown parameter vector θ_z ∈ R^(p_2×1), and the dimension satisfies p_2 ≤ 13.
According to Properties 6-8, the linear parameterized form of the compensated Jacobian matrix estimate Q̂ is derived as follows: where the regression matrix Y_k(y, y_d, q, q̇) is composed of Y_k,1 and Y_k,2. Based on the estimated value of the compensated Jacobian matrix, the reference velocity vector of the joints is defined as follows: q̇_r = ẑ Q̂⁺ ẏ_r, where ẑ is the estimated value of the depth parameter and Q̂⁺ is the pseudo-inverse of Q̂, which is determined by the following equation: The joint sliding-mode variable is constructed from the joint reference velocity q̇_r defined in Equation (29).
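The reference joint velocity q̇_r = ẑQ̂⁺ẏ_r can be sketched with NumPy's Moore-Penrose pseudo-inverse; the Q̂ and ẑ values below are placeholders, not quantities produced by the adaptive laws.

```python
import numpy as np

# Illustrative 2x3 compensated-Jacobian estimate Q_hat and depth estimate z_hat.
Q_hat = np.array([[120.0, -30.0, 10.0],
                  [15.0, 140.0, -20.0]])
z_hat = 1.6
y_r_dot = np.array([-19.0, -16.0])   # reference image velocity (pixels/s)

# q_r_dot = z_hat * pinv(Q_hat) @ y_r_dot
q_r_dot = z_hat * np.linalg.pinv(Q_hat) @ y_r_dot

# Sanity check: for a full-row-rank Q_hat, the right pseudo-inverse maps
# q_r_dot back to z_hat * y_r_dot exactly.
assert np.allclose(Q_hat @ q_r_dot, z_hat * y_r_dot)
```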
where s_q ∈ R^(n×1). Figure 1 shows the design structure diagram of the uncalibrated visual servo tracking control system. Based on the above analysis, the IBUVS finite-time tracking control law is proposed as follows: where K_s ∈ R^(n×n) and K_y ∈ R^(2×2) are undetermined gain matrices, α_1, α_2 ∈ R are undetermined constants, and sig(·)^α is a nonlinear function defined by the following formula.
Equation (33) has the following properties [29]: Similarly, the estimation error Q̂ − Q and the depth estimation error ẑ − z of the compensated Jacobian matrix are expressed linearly as follows: For the unknown parameter vector estimates θ̂_d, θ̂_k, θ̂_z, the following adaptive laws are proposed:
Sensors 2023, 23, 7133
To avoid confusion with the feature 3D coordinate vector x, the total state vector is represented by a separate symbol. According to Equations (42) and (48)-(50), the error dynamic equations of the closed-loop system can be summarized as follows:

Theorem 1. For the system shown in Equations (19), (20) and (24), under the action of the finite-time tracking control law and adaptive laws shown in Equations (42) and (48)-(50), if the selected constants and gain parameters meet the following sufficient conditions: λ > 0; K_s ∈ R^(n×n) and K_y ∈ R^(2×2) are positive definite symmetric matrices; ψ_d, ψ_k, ψ_z are positive definite symmetric matrices with proper dimensions; and 0 < α_1 < 1, α_2 = 2α_1/(1 + α_1), then the global finite-time stability of the closed-loop system is guaranteed in the sense of Formula (52).
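The role of the sig(·)^α term can be illustrated on a scalar system: for 0 < α < 1 the origin is reached in finite time, while α = 1 recovers the asymptotically stable exponential case. The simulation below is a sketch with assumed gains, not the closed-loop system (51).

```python
import numpy as np

def sig(x, alpha):
    """Continuous non-smooth function sig(x)^alpha = |x|^alpha * sign(x)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x) ** alpha * np.sign(x)

# Scalar sketch of finite-time convergence: for xdot = -sig(x)^alpha with
# 0 < alpha < 1, the origin is reached at T = |x0|^(1-alpha) / (1 - alpha)
# (T = 2 for x0 = 1, alpha = 0.5); alpha = 1 gives only exponential decay.
alpha, dt = 0.5, 1e-3
x, t = 1.0, 0.0
while abs(x) > 1e-6 and t < 5.0:   # forward-Euler integration
    x -= dt * float(sig(x, alpha))
    t += dt
```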

Proof of Global Asymptotic Stability of Closed-Loop Systems
The following formula can be derived from the sliding mode vector in Equation (41).
Similarly, taking the derivatives of V_2(x) and V_3(x) along the trajectory of system (51) yields: The following formula can be derived from Equations (56)-(58):
Since Ḣ − 2C is a skew-symmetric matrix, i.e., s_q^T(Ḣ − 2C)s_q = 0, the following formula can be derived by substituting this into Equation (59): According to the sufficient conditions of the above theorem and Equation (60), it is not difficult to derive that x_5 is bounded. Therefore, the estimates θ̂_d, θ̂_k, θ̂_z are also bounded, and the boundedness of ẑ, d̂^T, and D̂ follows. In the same way, the boundedness of Q̂ is obtained by means of Formula (33). According to the sign-function definition shown in Equation (43), it can be inferred that sig(s_q)^α_2 and sig(∆y)^α_1 are bounded; in addition, the boundedness of V̇ can be inferred. Since Formula (60) is a continuous, non-smooth function whose derivative cannot be obtained directly, its uniform continuity must be discussed piecewise. By taking the derivative of V in stages, the following equation can be derived: Through the above analysis, s̈_q is bounded. Moreover, by substituting the bounded quantities into (61), the boundedness of V̈(x) can be derived, so V̇ is uniformly continuous. According to Barbalat's lemma, as t → ∞, V̇ → 0, and thus s_q → 0 and ∆y → 0. The uniform continuity of ∆ẏ is given by:

Lemma 1 ([30]). Consider the following system: ẋ = f(x) + f̃(x), f̃(0) = 0, x ∈ R^n, (62) where f(x) is an n-dimensional continuous vector field that is homogeneous of degree k < 0 with respect to the dilation (r_1, ..., r_n), and f̃(x) is a continuous vector field. Suppose x = 0 is an asymptotically stable equilibrium point of the system ẋ = f(x). If lim_{ε→0} f̃_i(ε^{r_1}x_1, ..., ε^{r_n}x_n)/ε^{k+r_i} = 0 (i = 1, ..., n) holds uniformly for any x ∈ D = {x ∈ R^n | ‖x‖ ≤ δ}, δ > 0, then x = 0 is a locally finite-time stable equilibrium point of system (62).
Lemma 2 ([31]). (Barbalat's lemma) If V(x, t) has a finite limit as t → ∞ and V̇(x, t) is uniformly continuous in time t, then V̇(x, t) → 0 as t → ∞.

Lemma 3 ([32,33]
). If a system is globally asymptotically stable and locally finite-time convergent, then it is globally finite-time stable.
Lemma 1 can be applied in the following form. Let f̃(x) = (f̃_1(x), ..., f̃_n(x)) be a continuous vector field. System (51) can be rewritten as: where the nominal part is a homogeneous vector field of degree −1 < k = α_2 − 1 < 0 with respect to the dilation coefficients (r_1, r_2, r_4, r_5). By examining each f̃_i in the continuous vector field f̃(x), we can easily obtain the following results: for any x ∈ D = {x ∈ R^n | ‖x‖ ≤ δ}, δ > 0, the following formulas hold.
According to Lemma 1, the system (51) is locally finite-time stabilized. From Lemma 3 and the global asymptotic stability and local finite-time stability of the system (51), it can be deduced that the closed-loop system (51) is globally finite-time stable.


Experimental Platform
The effectiveness of the proposed IBUVS finite-time tracking control scheme is verified by experiments. The experimental hardware platform is composed of a camera, a manipulator, and a control platform, as shown in Table 2. Table 3 lists the D-H parameters of the Kinova MICO robot manipulator. The hardware system of the visual servo experiment platform is shown in Figure 2.
In the EIH configuration, a Logitech C310 camera is fixed to the end of the MICO manipulator with a fitted mount to avoid image jitter when the manipulator moves. The visual feature marker in the experiment is a feature color plate composed of four color blocks: red, green, blue, and yellow; the positions of the feature points C1-C4 are defined in the reference coordinate system of the color plate. Given initial estimates of the parameters, the adaptive algorithm performs online iterative estimation to achieve convergence of the system errors. The following aspects were specifically considered in the experiment to verify the adaptability of the IBUVS scheme.
In the EIH visual configuration, the depth-independent Jacobian adaptive estimation module (S-function) is Get-Adaptive-Depth-Independent-Jacobian, and the depth-parameter adaptive estimation module (S-function) is Get-Adaptive-Depth. The input parameter of these two functions is set to T_e^b(t), the pose transformation from the end-effector reference coordinate system to the base reference coordinate system. The parameter x_i^b to be estimated describes the position of the feature points with respect to the base coordinate system, and M_e^c is equivalent to the product of the pose relationship between the camera and the end-effector coordinate system and the camera internal parameter matrix.
Since the IBUVS scheme does not need to know the parameters to be estimated in advance, they do not need to be set; these include the 3D poses of the feature points, the internal imaging parameters of the camera, the external pose parameters of the camera, and the visual configuration (EIH or ETH).
In terms of visual configuration, the actual pose of the camera relative to the end-effector reference coordinate system is as follows: To investigate the adaptability and flexibility of the system to the three-dimensional pose parameters of the feature points, the reference coordinate system of the feature color plate adopts the following three sets of data:
From Figure 3, it is not difficult to find that the IBUVS control algorithm proposed in this paper not only completes the visual servo task but also produces a good three-dimensional trajectory of the camera mounted on the robot arm. Figures 4, 5, 8 and 9 show, respectively, the error curves and image tracks of the feature points in pose 1 and pose 3, as well as the angular velocity response of each joint, the joint sliding-mode variable s_q, and the torque output of each joint in pose 3. Figure 6 shows the convergence of some elements (θ̂_k,1 - θ̂_k,12) of the kinematic unknown-parameter estimate θ̂_k in the pose 1 experiment, and Figure 7 shows the convergence of some elements (θ̂_d,1 - θ̂_d,8) of the dynamic unknown-parameter estimate θ̂_d in the pose 1 experiment.
It can be observed from the above experimental results that, at the beginning of the servo task, the Jacobian matrix determined by the IBUVS control scheme from the initial estimated parameters deviates considerably from the actual Jacobian matrix, leaving the system far from the equilibrium point. This situation is further aggravated when the initial estimate deviates greatly from the actual value. However, as the control periods accumulate, the parameters to be estimated are iterated along the negative gradient direction of the image error and converge to a set of constant values proportional to the true values (as shown in Figures 6 and 7). At that point, the estimated Jacobian matrix approaches the actual Jacobian matrix, and the image-space error gradually converges.
The above experimental results verify the adaptability of the IBUVS scheme in EIH configuration to uncalibrated parameters such as 3D pose, feature color palette, and camera internal parameters.
To further verify the adaptability of the scheme to the internal imaging parameters and external pose parameters of different cameras, visual servo experiments are then conducted under the ETH configuration.
When EIH is switched to ETH, the input parameter of the depth-independent Jacobian adaptive estimation module (Get-Adaptive-Depth-Independent-Jacobian) and the depth-parameter adaptive estimation module (Get-Adaptive-Depth) should be switched to T_b^e(t), i.e., the pose transformation from the base reference coordinate system to the end-effector reference coordinate system. At the same time, the control gain should be adjusted appropriately according to the actual initial configuration and feature point selection. Apart from these steps, no other function parameters need to be adjusted in the IBUVS scheme.
The ETH configuration is shown in Figure 2b. A Logitech C920 is selected as the fixed camera in this experiment. The reference coordinate system of the camera adopts the following two groups of different poses; the three-dimensional space trajectory of the end-effector is shown in Figure 10.

The three-dimensional pose of the characteristic color plate relative to the reference coordinate system of the end-effector is as follows: Similarly, the above pose parameters do not need to be set in the IBUVS controller function. In the ETH configuration, with the camera placed in two different poses, the manipulator completes the visual servo task well and drives the feature color plate fixed at its end to move along the desired image trajectory, as shown in Figure 10. Under the different camera poses, the image error curves and image tracks of the feature points are shown in Figures 11 and 12, respectively.
Similar to the EIH configuration, under the action of the IBUVS controller, the feature points exhibit different degrees of jitter and deviation from the equilibrium point at the beginning of the servo task. However, as the control periods accumulate, the parameters to be estimated are iterated along the negative gradient of the image error, driving the estimated Jacobian matrix to approximate the actual system Jacobian. The image-space error converges gradually, and finally motion along the desired image trajectory is realized. The above two groups of experiments show that the proposed IBUVS scheme can still effectively complete the visual servo task even when the camera imaging model, the relative pose of the camera and the manipulator, and the pose of the characteristic color plate differ considerably.

Experiment 2. Verify the fast convergence of schemes (42) and (48)-(50) near the equilibrium point.
The convergence rate is the key index for evaluating the performance of the IBUVS controller.
The convergence rate is a key index for evaluating the performance of an IBUVS controller. In visual servoing, when there is a large difference between the initial and desired poses, and the system suffers from parameter estimation error, pose estimation error, and delay in computing the control output, the IBUVS controller must adopt a small control gain to guarantee stability. This directly leads to a slow convergence rate near the equilibrium point.
To fully verify the fast convergence of the proposed IBUVS controller (hereinafter IBUVS-F) near the equilibrium point, the asymptotically stable IBUVS controller (hereinafter IBUVS-A) proposed in [33] was selected as a comparison scheme. In addition, the open-source visual servo platform ViSP provides an adaptive gain function of the form λ(x) = a · exp(bx) + c, which can be used to improve IBUVS-A; this yields a second comparison scheme, abbreviated IBUVS-AAG. To quantitatively evaluate the differences in convergence time among the three schemes, it is stipulated in this experiment that the system is considered convergent when the average error modulus of the four image feature points is less than 10 pixels, and the convergence time is taken as the quantitative index.
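The role of such an adaptive gain can be sketched as follows. The function name and the parameter values a, b, and c below are illustrative assumptions, not the values used in ViSP or in the experiments:

```python
import math

def adaptive_gain(error_norm: float, a: float = 3.5, b: float = -8.0, c: float = 0.5) -> float:
    """Adaptive gain of the form lambda(x) = a * exp(b * x) + c.

    With b < 0, the gain tends to c for large image errors (gentle
    control far from the goal) and rises to a + c as the error norm
    approaches zero, speeding up convergence near the equilibrium.
    Parameter values here are illustrative only.
    """
    return a * math.exp(b * error_norm) + c

# High gain near the equilibrium, low gain far from it:
print(adaptive_gain(0.0))               # 4.0  (= a + c)
print(round(adaptive_gain(1.0), 3))     # 0.501 (close to c)
```

The monotone decay in the error norm is what lets the scheme keep a small, safe gain when the pose difference is large while recovering a fast response near the goal.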
Considering the differences in gain coefficients across schemes, to make IBUVS-F comparable with IBUVS-A and IBUVS-AAG, gains producing similar control-torque output ranges were combined into one comparison group. Specifically, seven groups of image-error gain coefficients for the IBUVS-A, IBUVS-AAG, and IBUVS-F controllers were selected for comparison, as shown in Table 4. In the comparison tests, each scheme in each group was run for at least 1500 control cycles (i.e., 49.5 s). Without loss of generality, the experiment was repeated five times for each scheme in each group, and the average of the five results was taken as the convergence time. The comparison results of the three schemes under different gain conditions are shown in Table 4 and Figure 13.
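The convergence criterion above can be made concrete with a small sketch. The 0.033 s control period is inferred from the stated 1500 cycles ≈ 49.5 s, and the function name is ours:

```python
def convergence_time(point_errors, period_s=0.033, threshold_px=10.0):
    """Return the convergence time under the experiment's criterion.

    point_errors: one entry per control cycle, each a sequence of the
    error moduli (in pixels) of the four image feature points.  The
    system counts as convergent at the first cycle whose average
    modulus drops below 10 pixels; returns None if that never happens.
    """
    for cycle, errs in enumerate(point_errors):
        if sum(errs) / len(errs) < threshold_px:
            return cycle * period_s
    return None

# A run whose average error first drops below 10 px at cycle 2:
log = [(40, 38, 42, 44), (15, 12, 18, 13), (9, 8, 11, 10), (4, 3, 5, 4)]
print(round(convergence_time(log), 3))  # 0.066
```

Averaging the five repeated runs per group, as the protocol prescribes, would then simply average five such values.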

The comparative experimental results show that the deviation between the convergence times of the three schemes is small when a large control gain is applied. However, as the control gain decreases, the convergence time of the IBUVS-F scheme becomes significantly shorter than that of the IBUVS-A and IBUVS-AAG schemes. Figure 14 shows the image error convergence curves of the three schemes in the sixth comparison group. In the actual control process, a larger control gain can effectively reduce the convergence time. However, when the pose difference is large, the output torque of the controller is also large, which easily causes jitter of the manipulator joints. Especially when there is a large pose difference along the Z-axis of the camera, the feature points can easily leave the field of view of the arm-mounted camera, causing the visual servo task to fail. Figure 15 compares the control behavior of IBUVS-A and IBUVS-F for three groups of larger gains: both schemes failed the servo task to different degrees in ten independent experiments, and when the gain was increased further, both schemes failed to complete the task in all ten experiments.
On the other hand, a small control gain ensures that the feature points follow a good error convergence curve in the image space, so that the manipulator moves along a smooth three-dimensional trajectory and the reliability of visual servo control is guaranteed. In the initial pose-adjustment stage, the IBUVS-F scheme proposed in this paper can effectively reduce the torque output, keep the manipulator attitude stable, and significantly shorten the convergence time, which improves the control quality of IBUVS.

As can be seen from Table 4, when the convergence time is about 7 s, the gain coefficient of the IBUVS-A scheme is 0.35 and that of the IBUVS-F scheme is 0.03. In the EIH configuration, these two schemes were used to carry out visual servo tracking experiments. The three-dimensional motion trajectory curves of the arm-mounted camera under the two schemes are shown in Figure 16. The experimental results show an obvious difference between the two schemes: the IBUVS-F trajectory is close to a straight line, while the IBUVS-A trajectory is an S-shaped curve. The initial output torque of the IBUVS-A scheme is too large, which can easily lead to failure of the visual servo task.
Figure 17 shows the joint sliding-mode variable responses of IBUVS-F at gain coefficients of 0.008 and 0.03, and of the IBUVS-AAG scheme at 0.3. The joint sliding-mode variable gradually approaches zero as the image error converges. It is worth noting that a certain degree of high-frequency joint angular velocity response remains in the sliding-mode space of IBUVS-F after the system converges; this is caused by the large control output of the IBUVS-F scheme near the equilibrium point. Compared with the IBUVS-AAG scheme, which also has a convergence time of about 7 s, the high-frequency angular velocity response level of IBUVS-AAG is similar to that of IBUVS-F. Although the rapid convergence of IBUVS-F near the equilibrium point comes at the cost of some joint-space noise, it still has advantages over the other schemes. The above analysis demonstrates the effectiveness and superiority of the proposed IBUVS-F scheme.

Conclusions
Based on the adaptive Jacobian method, a finite-time visual servo control scheme for an uncalibrated manipulator is proposed in this paper. By designing a finite-time controller and proposing adaptive laws for the depth, kinematic, and dynamic parameters, finite-time tracking of the desired image trajectory is realized. The finite-time tracking controller has a nonlinear proportional-derivative plus dynamic feedforward compensation structure (NPD+), which improves the control quality of the closed-loop system by applying continuous non-smooth nonlinear functions to the feedback errors. By means of Lyapunov stability theory and finite-time stability theory, global finite-time stability of the closed-loop system is proved. The experimental results show that, compared with existing schemes, the proposed uncalibrated visual servo controller adapts not only to changes in the EIH and ETH visual configurations but also to changes in the relative pose of the feature points and of the camera. At the same time, the convergence rate near the equilibrium point is effectively improved, and the controller has better dynamic stability. In the dynamic equation of the manipulator system (Equation (24)), the linear parameterization method is used to separate the unknown uncertain parameters, and corresponding adaptive laws are designed to estimate them. The effect of this method on the dynamic estimation error of the parameters needs to be further studied.
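As one illustration of the "continuous non-smooth nonlinear function" ingredient mentioned above, a standard finite-time feedback primitive is sig(e)^α = sign(e)·|e|^α with 0 < α < 1. This is the generic primitive only, not a reproduction of the paper's exact NPD+ law:

```python
def sig(e: float, alpha: float) -> float:
    """sig(e)^alpha = sign(e) * |e|^alpha.

    For 0 < alpha < 1 this function is continuous at e = 0 but not
    Lipschitz there.  Compared with plain proportional feedback, it
    gives a relatively stronger correction for small errors
    (|e|**alpha > |e| when |e| < 1), which is what enables finite-time
    rather than merely asymptotic convergence near the equilibrium.
    """
    return (1.0 if e >= 0 else -1.0) * abs(e) ** alpha

print(sig(0.25, 0.5))   # 0.5 -- stronger than the raw error 0.25
print(sig(-0.25, 0.5))  # -0.5
print(sig(0.0, 0.5))    # 0.0
```

Applying such a term to the image feedback error, in place of a purely linear one, is consistent with the improved convergence rate near the equilibrium point reported in the experiments.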

Institutional Review Board Statement:
The paper does not involve human or animal research.

Informed Consent Statement:
The study did not involve humans.

Data Availability Statement:
The data that support the findings of this study are included within the article.

Conflicts of Interest:
The authors declare no conflict of interest.