Article

Discrete Finite-Time Convergent Neurodynamics Approach for Precise Grasping of Multi-Finger Robotic Hand

1 School of Intelligent Systems Engineering, Shenzhen Campus, Sun Yat-sen University, Shenzhen 518107, China
2 Guangdong Provincial Key Laboratory of Fire Science and Intelligent Emergency Technology, Guangzhou 510006, China
3 General Embodied AI Center, Sun Yat-sen University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(23), 3823; https://doi.org/10.3390/math13233823
Submission received: 30 September 2025 / Revised: 20 November 2025 / Accepted: 24 November 2025 / Published: 28 November 2025
(This article belongs to the Special Issue Mathematical Methods for Intelligent Robotic Control and Design)

Abstract

The multi-finger robotic hand exhibits significant potential in grasping tasks owing to its high degrees of freedom (DoFs). Object grasping results in a closed-chain kinematic system between the hand and the object. This increases the dimensionality of trajectory tracking and substantially raises the computational complexity of traditional methods. Therefore, this study proposes the discrete finite-time convergent neurodynamics (DFTCN) algorithm to address the aforementioned issue. Specifically, a time-varying quadratic programming (TVQP) problem is formulated for each finger, incorporating joint angle and angular velocity constraints through log-sum-exp (LSE) functions. The TVQP problem is then transformed into a time-varying equation system (TVES) problem using the Karush–Kuhn–Tucker (KKT) conditions. A novel control law is designed, employing a three-step Taylor-type discretization for efficient implementation. Theoretical analysis verifies the algorithm's stability and finite-time convergence property, with the maximum steady-state residual error being $O(\tau^3)$. Numerical simulations illustrate the favorable convergence and high accuracy of the DFTCN algorithm compared with three existing dominant neurodynamic algorithms. The real-robot experiments further confirm its capability for precise grasping, even in the presence of camera noise and external disturbances.

1. Introduction

The multi-finger robotic hand (robotic hand), with high degrees of freedom (DoFs), has shown significant potential in complex manipulation tasks and garnered widespread attention [1]. The main advantage of the robotic hand lies in its ability to imitate the intricate movements and manipulation capabilities of the human hand. This allows it to perform tasks such as grasping [2], transporting [3], and fine manipulation [4] through the synergistic movements of multiple joints. Robotic hands can grasp objects of various shapes and weights and can achieve high-precision trajectory tracking. Their high DoFs enable adaptation to a dynamic environment, significantly expanding the boundaries of robotics applications. Numerous methods have been explored for robotic hand grasping tasks. These include reinforcement learning [5,6,7], imitation learning [8,9], diffusion policies [10], and tactile-based dexterous manipulation learning [11]. Trajectory tracking is a core component of robotic hand grasping tasks. It fundamentally involves the precise control of the motion trajectories of each joint and fingertip. This enables a smooth and accurate transition from the initial pose to the target grasping pose, even in complex environments.
In the application of a robotic hand, the inherently high DoFs give rise to a vast configuration space and intricate joint couplings. This complexity exacerbates the computational burden of trajectory planning. It also increases the risk of slow convergence and amplified trajectory tracking errors. This ultimately results in deviations in the final grasping posture. Most existing trajectory tracking schemes for robotic hands involve deep learning and reinforcement learning [12,13,14]. Luo’s work on quadruped robots offers valuable engineering insights for error suppression in robotic hand drive systems [15,16]. It also addresses model uncertainties in multi-finger gripping tasks. Notably, Peng et al. proposed a deep learning-based method for autonomous, real-time digital meter reading in natural scenes. The resulting perception framework can be adapted for robotic hand visual servoing systems [17]. In ref. [18,19], Luo et al. conducted in-depth research on high-DoF mechanisms. The effectiveness of high-DoF lower limb devices and supernumerary limbs was also validated. Their findings provide a theoretical reference for collaborative modeling of robotic hands. Furthermore, Peng et al. developed a multi-objective optimization-based visual servo control method for endoscope-holding robots. The method is highly relevant to the closed-loop control of dexterous manipulators [20]. Cheng et al. proposed a hierarchical planning framework for flexible robot operations [21]. The framework utilizes contact to explore the flexibility of the hand and the external environment. This approach helps overcome challenges such as surface roughness, complex fine movements, and constantly changing scenes. Ma et al. proposed a manipulation method based on adaptive finger joint gait, which dynamically adjusts the gaps of the basic actions in real time to ensure the force/moment balance of the object [22]. 
Their method demonstrates strong robustness when the grasped object is subjected to external disturbance forces. In ref. [23], Ma et al. introduced a vision-tactile fusion grasping control framework, in which high-resolution tactile sensors perceive contact states in real time while visual localization is integrated to achieve closed-loop control. In ref. [24], Li et al. proposed the neural Jacobian field algorithm, which employs deep learning to infer Jacobian matrices in real time from visual data and achieves sub-millimeter accuracy, addressing the model mismatch caused by mechanical wear and sensor noise. In ref. [25], Altiner et al. proposed a robust control framework based on scenario convex optimization and employed pole placement techniques to ensure stability during the dynamic grasping process; this solution is particularly effective for grasping curved objects. In ref. [26], Kim et al. proposed a two-step method that combines diverse basic shape elements with a trained deep learning network. When grasping previously unseen objects, the method achieved a 93% average success rate and a tenfold increase in object recognition speed. Although the algorithms above have shown encouraging results in grasping trajectory tracking, the high DoFs of robotic hands still significantly increase the computational burden of trajectory planning, which often leads to slow convergence rates and amplified tracking errors. Further research into efficient trajectory tracking schemes capable of handling high-dimensional dynamic characteristics therefore remains crucial.
Since its proposal in 2002 [27], the zeroing neurodynamics (ZND) algorithm has been extensively studied. Compared with traditional dynamics algorithms [28,29], the ZND algorithm provides a theoretical framework for efficiently solving trajectory tracking problems of robotic arms. Its fundamental idea is to force the error function to converge rapidly to zero via a zeroing strategy of dynamic systems [30,31,32]. The combination of ZND and quadratic programming effectively addresses high-dimensional problem solving [33,34]. In ref. [35], Li et al. proposed a noise-tolerant zeroing neurodynamics (NTZN) algorithm, which provides a more precise neurodynamic approach for solving dynamic nonlinear least-squares problems in noisy environments. NTZN ensures convergence under both small-residual and large-residual conditions, and the model applies to motion control problems of robotic arms with joint constraints. In ref. [36], Liu et al. established an upper limb–exoskeleton coupling dynamics model based on human active motion intention to mitigate human–robot conflict in rehabilitation. Their method demonstrates superior accuracy and robustness under non-ideal conditions compared with ZND, GNN, and PD controllers. In ref. [37], Qiu et al. proposed a five-step discrete-time zeroing neurodynamic algorithm for future constrained quadratic programming. The algorithm integrates a continuous-time model with a multi-step discretization rule, which theoretically guarantees convergence and high precision; validation in mobile robot motion control further showcases its practical utility. In ref. [38], Abbassi et al. proposed a higher-order ZND algorithm for solving the time-varying quaternion matrix inverse problem across the quaternion, complex, and real domains. This model achieves faster convergence than the conventional ZND strategy, as validated in robotic motion tracking experiments. In ref. [39], Jin et al. proposed two types of robust ZND algorithms featuring fixed-time convergence and strong anti-noise capability. Numerical simulations validated their effectiveness in solving high-dimensional time-varying matrix equations, and experiments with robotic manipulators further confirmed the models' stable performance. In ref. [40], Chen et al. proposed a new continuous-time QR decomposition model. Both theoretical analysis and numerical simulations confirmed the correctness and effectiveness of the proposed continuous and discrete ZND algorithms.
Although the ZND algorithm and its extensions have been successfully applied to robotic arm trajectory tracking, significant limitations remain, and existing models struggle with high-dimensional, multi-constrained trajectory tracking problems for robotic hands. Firstly, these models rely on the solution of differential equations; this imposes a heavy computational burden at high sampling frequencies and often leads to performance degradation in discrete control systems. Secondly, their convergence speed depends on the integration step size and time constant, and these models tend to exhibit asymptotic rather than finite-time convergence, which limits their ability to achieve a fast dynamic response. Thirdly, discretizing the system often introduces significant steady-state residual errors because of low-order discrete approximations, which makes it challenging to ensure accuracy and stability with longer sampling gaps. To overcome these limitations, this study proposes the discrete finite-time convergent neurodynamics (DFTCN) algorithm. The improvements lie in three main aspects: (1) A three-step Taylor-type discretization formula is introduced, allowing the continuous model to be constructed directly in discrete form. This avoids the cumulative errors and computational overhead caused by numerical integration, thereby enabling efficient discrete control. (2) A nonlinear control law based on power functions is designed to ensure that errors converge strictly to zero within finite time, rather than merely approaching zero asymptotically, which significantly improves the theoretical response speed. (3) Theoretical analysis shows that DFTCN achieves a maximum steady-state residual error of $O(\tau^3)$, which guarantees stability and high accuracy even with relatively long sampling gaps.
As a result, the DFTCN algorithm retains the finite-time convergence capability of ZND-type algorithms and simultaneously ensures high steady-state accuracy. Specifically, the main contributions of this study are as follows.
(1)
The DFTCN algorithm is proposed, which is capable of effectively solving the trajectory tracking problem for a robotic hand with high DoFs. Detailed theoretical derivation and analysis are provided to support the validity and performance of the DFTCN algorithm.
(2)
The log-sum-exp (LSE) function is introduced to merge the inequality constraints of joint angles and angular velocities during the construction of the time-varying quadratic programming (TVQP) problem for each finger. This approach further improves the finite-time convergence and stability of the DFTCN algorithm.
(3)
Numerical simulations are performed to verify the convergence and accuracy of the proposed algorithm in solving the trajectory tracking problem. Furthermore, real-robot experiments are conducted in which the robotic hand successfully grasps and transports a tissue box. These results illustrate the DFTCN algorithm’s capability to achieve precise grasping in real robotic deployment.
This study is organized as follows. Section 2 formulates the TVQP problem and presents the construction of the novel DFTCN algorithm for a robotic hand; a rigorous theoretical analysis of the stability, finite-time convergence, and accuracy of the DFTCN algorithm is also provided. Section 3 presents the numerical simulations and real-robot experiments. Section 4 concludes this study.

2. Design and Analysis of DFTCN Algorithm

This section first introduces the DFTCN algorithm to address the trajectory tracking problem for a robotic hand with high DoFs. The overall framework is shown in Figure 1. For the TVQP problem, the LSE function is integrated with joint angle constraints to establish formulations for both single-finger and multi-finger scenarios. The Karush–Kuhn–Tucker (KKT) conditions are then applied to transform the TVQP problem into a time-varying equation system (TVES). A control law is subsequently designed to guarantee finite-time convergence. The control law is discretized using a three-step Taylor-type formula to construct the DFTCN algorithm. This approach achieves sub-millimeter accuracy. In the real-robot implementation, the entire process is illustrated, from object point cloud acquisition to grasping task execution. These experiments validate the engineering feasibility of the algorithm.

2.1. TVQP Problem

In this study, the trajectory tracking problem of a robotic hand with high DoFs is transformed into a TVQP problem. Due to the high similarity in modeling among the fingers, the sub-optimization problem for each finger is formulated as a sub-TVQP problem. These sub-problems are then aggregated into an integrated TVQP problem for the entire system. The following subsection introduces the modeling of a single finger.
For the $i$-th finger with $n_i$ DoFs, the subproblem is described as the following optimization problem:

$$\min_{\dot{\theta}^{(i)}(t)} \ \frac{1}{2}\dot{\theta}^{(i)\mathrm{T}}(t)\, I_{n_i\times n_i}\, \dot{\theta}^{(i)}(t) + \lambda_1^{(i)}\big(\theta^{(i)}(t)-\theta^{(i)}(0)\big)^{\mathrm{T}}\dot{\theta}^{(i)}(t), \tag{1}$$
$$\text{s.t.}\ \ J^{(i)}\big(\theta^{(i)}(t)\big)\,\dot{\theta}^{(i)}(t) = \dot{\upsilon}_{\mathrm{I}}^{(i)}(t) - \lambda_2^{(i)}\big(\upsilon_{\mathrm{R}}^{(i)}(t)-\upsilon_{\mathrm{I}}^{(i)}(t)\big), \tag{2}$$
$$\theta^{(i)-} \le \theta^{(i)}(t) \le \theta^{(i)+}, \tag{3}$$
$$\dot{\theta}^{(i)-} \le \dot{\theta}^{(i)}(t) \le \dot{\theta}^{(i)+}, \tag{4}$$

where $\theta^{(i)}(t) = [\theta_1^{(i)}(t), \theta_2^{(i)}(t), \ldots, \theta_{n_i}^{(i)}(t)]^{\mathrm{T}} \in \mathbb{R}^{n_i}$ denotes the joint angles of the finger; $I_{n_i\times n_i}$ is the $n_i$-dimensional identity matrix; $J^{(i)}(\theta^{(i)}(t)) \in \mathbb{R}^{3\times n_i}$ denotes the position Jacobian matrix of the end-effector of the finger; $\upsilon_{\mathrm{R}}^{(i)}(t) \in \mathbb{R}^3$ and $\upsilon_{\mathrm{I}}^{(i)}(t) \in \mathbb{R}^3$ denote the actual position and the desired position of the end-effector of the finger, respectively; $\theta^{(i)-} \in \mathbb{R}^{n_i}$ and $\theta^{(i)+} \in \mathbb{R}^{n_i}$ denote the lower and upper boundaries of the joint angles of the finger, respectively; $\dot{\theta}^{(i)-} \in \mathbb{R}^{n_i}$ and $\dot{\theta}^{(i)+} \in \mathbb{R}^{n_i}$ denote the lower and upper limits of the joint angular velocities of the finger, respectively; and $\lambda_1^{(i)}, \lambda_2^{(i)} > 0$.
Due to the limited DoFs of a single finger, only the end-effector position constraint (2) is considered as the equality constraint. For the inequality constraint (3), the first-order Zhang equivalency formula [41] is employed, so that (3) can be formulated in terms of $\dot{\theta}^{(i)}(t)$ by using variable bounds:

$$\lambda_3^{(i)}\big(\theta^{(i)-} - \theta^{(i)}(t)\big) \le \dot{\theta}^{(i)}(t) \le \lambda_3^{(i)}\big(\theta^{(i)+} - \theta^{(i)}(t)\big), \tag{5}$$

where $\lambda_3^{(i)} > 0$.
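As a small illustration of the bound conversion in (5), the following Python sketch (a helper of our own; the function name, parameters, and joint values are hypothetical) combines the first-order angle-limit term with the hard velocity limits using plain max/min, which the formulation later replaces with a smooth merge:

```python
def velocity_bounds(theta, th_min, th_max, lam3, vmin, vmax):
    """Feasible velocity interval for one joint, combining the
    first-order angle-limit conversion lam3*(bound - theta) from (5)
    with the hard velocity limits (4). Plain max/min is used here;
    the paper smooths this merge with the LSE function."""
    lo = max(lam3 * (th_min - theta), vmin)
    hi = min(lam3 * (th_max - theta), vmax)
    return lo, hi

# A joint close to its upper angle limit: the admissible upper velocity
# shrinks toward zero, while the lower one stays at the hard limit.
lo, hi = velocity_bounds(theta=0.95, th_min=0.0, th_max=1.0,
                         lam3=4.0, vmin=-1.0, vmax=1.0)
print(lo, round(hi, 3))  # -1.0 0.2
```

This makes explicit why the conversion is useful: as the joint approaches a limit, the admissible velocity toward that limit collapses to zero automatically.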
To merge the two inequality constraints (5) and (4), the following formula is employed:

$$f_u(x, y) = \frac{1}{u}\ln\big(e^{ux} + e^{uy}\big), \tag{6}$$

where $u \neq 0$ is a scalar parameter and $x, y \in \mathbb{R}$. In the LSE function (6), when $u$ is a sufficiently large positive number, the return value of $f_u(x, y)$ approaches the maximum of $x$ and $y$; conversely, when $u$ is a large negative number, the return value of $f_u(x, y)$ approaches the minimum of $x$ and $y$.
To illustrate the performance differences between the LSE function (6) and the max/min function, their comparison results on key indicators are presented in Figure 2. The max/min function can select the desired value. However, its non-differentiability at piecewise points poses a challenge. In contrast, the continuous differentiability of the LSE function provides smooth constraint conditions for the optimization problem. This property facilitates the finite-time convergence and stability of the DFTCN algorithm.
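The smooth max/min behavior of the LSE function (6) can be checked numerically. A minimal sketch (the helper below is our own, with a standard max-shift added to keep the exponentials from overflowing for large $|u|$):

```python
import math

def lse(x, y, u):
    """Log-sum-exp smoothing f_u(x, y) = ln(exp(u*x) + exp(u*y)) / u.
    The max-shift keeps exp() from overflowing for large |u| (a
    standard numerical trick; the formula itself matches (6))."""
    m = max(u * x, u * y)
    return (m + math.log(math.exp(u * x - m) + math.exp(u * y - m))) / u

print(round(lse(1.0, 2.0, u=50.0), 3))   # 2.0  (smooth maximum)
print(round(lse(1.0, 2.0, u=-50.0), 3))  # 1.0  (smooth minimum)
```

Note that for moderate positive $u$ the result slightly overestimates the true maximum (e.g., $f_5(1, 2) \approx 2.0013$), which is exactly the smooth, differentiable boundary behavior exploited here.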
The combination of the two inequality constraints (4) and (5) with the LSE function (6) leads to

$$\zeta^{(i)-}(t) = f_u\Big(\lambda_3^{(i)}\big(\theta^{(i)-} - \theta^{(i)}(t)\big),\ \dot{\theta}^{(i)-}\Big) = \frac{1}{u}\ln\Big(e^{u\lambda_3^{(i)}(\theta^{(i)-} - \theta^{(i)}(t))} + e^{u\dot{\theta}^{(i)-}}\Big), \tag{7}$$
$$\zeta^{(i)+}(t) = f_{-u}\Big(\lambda_3^{(i)}\big(\theta^{(i)+} - \theta^{(i)}(t)\big),\ \dot{\theta}^{(i)+}\Big) = -\frac{1}{u}\ln\Big(e^{-u\lambda_3^{(i)}(\theta^{(i)+} - \theta^{(i)}(t))} + e^{-u\dot{\theta}^{(i)+}}\Big), \tag{8}$$

where $u > 0$ is taken large and the exponentials act element-wise, so that (7) smoothly approximates the tighter (maximum) lower bound and (8) the tighter (minimum) upper bound.
Remark 1. 
The LSE smoothing parameter $u$ balances constraint fidelity against control smoothness. A larger $u$ closely approximates the hard max/min constraints with strengthened enforcement; however, it also steepens the boundary derivatives, which risks control spikes that harm stability. A smaller $u$ softens the constraints to reduce spikes, yet raises the risk of boundary overstepping, which weakens convergence robustness. The objective weights $\lambda_1^{(i)}$, $\lambda_2^{(i)}$, and $\lambda_3^{(i)}$ shape the TVQP trade-offs and link directly to convergence and stability. $\lambda_1^{(i)}$ penalizes joint deviation: larger values reduce angular drift and boost stability but may retard convergence, while smaller values speed up trajectory tracking at the cost of increased angular drift and a higher risk of constraint violation. $\lambda_2^{(i)}$ is the weight on the position error: larger values accelerate error reduction and aid convergence, but can induce aggressive velocities and oscillations that harm stability; conversely, smaller values slow convergence and risk precision loss. $\lambda_3^{(i)}$ adjusts the boundaries of the joint angle-velocity constraints: larger values tighten the boundary velocity limits, which helps prevent overshoot and enhances stability but may slow down convergence, while smaller values relax the limits to speed up convergence yet raise the risk of triggering hardware protection.
Therefore, by integrating Equations (7) and (8), the TVQP problem for the $i$-th finger is formulated as

$$\begin{aligned} \min_{\dot{\theta}^{(i)}(t)}\ & \frac{1}{2}\dot{\theta}^{(i)\mathrm{T}}(t)\, I_{n_i\times n_i}\, \dot{\theta}^{(i)}(t) + q^{(i)\mathrm{T}}(t)\dot{\theta}^{(i)}(t), \\ \text{s.t.}\ & J^{(i)}(t)\dot{\theta}^{(i)}(t) = d^{(i)}(t), \\ & \zeta^{(i)-}(t) \le \dot{\theta}^{(i)}(t) \le \zeta^{(i)+}(t), \end{aligned}$$

where $q^{(i)}(t)$ and $d^{(i)}(t)$ are defined as

$$q^{(i)}(t) = \lambda_1^{(i)}\big(\theta^{(i)}(t)-\theta^{(i)}(0)\big) \in \mathbb{R}^{n_i}, \quad d^{(i)}(t) = \dot{\upsilon}_{\mathrm{I}}^{(i)}(t) - \lambda_2^{(i)}\big(\upsilon_{\mathrm{R}}^{(i)}(t)-\upsilon_{\mathrm{I}}^{(i)}(t)\big) \in \mathbb{R}^{3}.$$
The above bounded inequality constraints are transformed into unilateral inequalities:

$$\begin{aligned} \min_{\dot{\theta}^{(i)}(t)}\ & \frac{1}{2}\dot{\theta}^{(i)\mathrm{T}}(t)\, I_{n_i\times n_i}\, \dot{\theta}^{(i)}(t) + q^{(i)\mathrm{T}}(t)\dot{\theta}^{(i)}(t), \\ \text{s.t.}\ & J^{(i)}(t)\dot{\theta}^{(i)}(t) = d^{(i)}(t), \\ & A^{(i)}(t)\dot{\theta}^{(i)}(t) \le \zeta^{(i)}(t), \end{aligned}$$

where $A^{(i)}(t)$ and $\zeta^{(i)}(t)$ are defined as

$$A^{(i)}(t) = \begin{bmatrix} I_{n_i\times n_i} \\ -I_{n_i\times n_i} \end{bmatrix} \in \mathbb{R}^{2n_i\times n_i}, \quad \zeta^{(i)}(t) = \begin{bmatrix} \zeta^{(i)+}(t) \\ -\zeta^{(i)-}(t) \end{bmatrix} \in \mathbb{R}^{2n_i}.$$
Then, the following standard TVQP problem for trajectory tracking of a robotic hand is formulated as

$$\begin{aligned} \min_{\dot{\theta}(t)}\ & \frac{1}{2}\dot{\theta}^{\mathrm{T}}(t)\, I_{n\times n}\, \dot{\theta}(t) + q^{\mathrm{T}}(t)\dot{\theta}(t), \\ \text{s.t.}\ & J(t)\dot{\theta}(t) = d(t), \\ & A(t)\dot{\theta}(t) \le \zeta(t), \end{aligned}$$

where $\dot{\theta}(t)$, $q(t)$, $J(t)$, $d(t)$, $A(t)$, $\zeta(t)$, and $n$ are defined as

$$\dot{\theta}(t) = \big[\dot{\theta}^{(1)\mathrm{T}}(t), \dot{\theta}^{(2)\mathrm{T}}(t), \dot{\theta}^{(3)\mathrm{T}}(t), \dot{\theta}^{(4)\mathrm{T}}(t), \dot{\theta}^{(5)\mathrm{T}}(t)\big]^{\mathrm{T}} \in \mathbb{R}^{n},$$
$$q(t) = \big[q^{(1)\mathrm{T}}(t), \ldots, q^{(5)\mathrm{T}}(t)\big]^{\mathrm{T}} \in \mathbb{R}^{n}, \quad d(t) = \big[d^{(1)\mathrm{T}}(t), \ldots, d^{(5)\mathrm{T}}(t)\big]^{\mathrm{T}} \in \mathbb{R}^{15},$$
$$J(t) = \operatorname{blkdiag}\big(J^{(1)}(t), J^{(2)}(t), J^{(3)}(t), J^{(4)}(t), J^{(5)}(t)\big) \in \mathbb{R}^{15\times n},$$
$$A(t) = \operatorname{blkdiag}\big(A^{(1)}, A^{(2)}, A^{(3)}, A^{(4)}, A^{(5)}\big) \in \mathbb{R}^{2n\times n},$$
$$\zeta(t) = \big[\zeta^{(1)\mathrm{T}}(t), \ldots, \zeta^{(5)\mathrm{T}}(t)\big]^{\mathrm{T}} \in \mathbb{R}^{2n}, \quad n = \sum_{i=1}^{5} n_i.$$
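For concreteness, the block-diagonal stacking of the per-finger matrices can be sketched as follows. The finger Jacobian values and dimensions below ($n_1 = 3$, $n_2 = 2$, only two fingers instead of five) are made up purely for illustration:

```python
def block_diag(blocks):
    """Place each matrix (a list of rows) on the diagonal, zeros
    elsewhere, mirroring how J(t) and A(t) are assembled from the
    per-finger blocks."""
    rows = sum(len(b) for b in blocks)
    cols = sum(len(b[0]) for b in blocks)
    out = [[0.0] * cols for _ in range(rows)]
    r = c = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, val in enumerate(row):
                out[r + i][c + j] = val
        r += len(b)
        c += len(b[0])
    return out

# Hypothetical fingertip Jacobians: each maps n_i joint rates to a 3-D
# fingertip velocity, so each block is 3 x n_i (values are made up).
J1 = [[1.0, 0.0, 0.2], [0.0, 1.0, 0.1], [0.0, 0.0, 1.0]]  # n_1 = 3
J2 = [[0.5, 0.3], [0.1, 0.9], [0.0, 0.4]]                 # n_2 = 2
J = block_diag([J1, J2])
print(len(J), len(J[0]))  # 6 5  (3 rows per finger, n = 3 + 2 columns)
```

The off-diagonal zero blocks make explicit that the fingers are kinematically decoupled in the stacked system, even though they are solved jointly.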

2.2. Control Law Design

Building on the TVQP problem formulation, the control law of the DFTCN algorithm is now developed. To handle the constrained optimization efficiently, the TVQP problem is transformed into a TVES problem via the KKT conditions, establishing an equation-based framework for the DFTCN algorithm.
The transformation of the TVQP problem into a solvable TVES problem is achieved by applying the KKT conditions, which recast the constrained optimization problem as a set of equations that define optimality. The key challenge is the presence of non-smooth complementarity conditions, which are resolved by introducing a smooth perturbed mapping. Specifically, the perturbed Fischer–Burmeister (PFB) function [34] is introduced to handle the inequality constraints and is defined as

$$\psi_{\mathrm{PFB}}(a, b) = a + b - \sqrt{a \circ a + b \circ b + \varepsilon}, \quad \varepsilon \to 0^{+},$$

where $a$, $b$, and $\varepsilon$ are all column vectors and the square root is applied element-wise. Combining the KKT conditions with the PFB function yields a fully differentiable TVES problem, which the DFTCN algorithm solves efficiently for precise tracking.
The resulting TVES problem is obtained:

$$H(t)x(t) + g(t) = 0, \tag{10}$$

where $H(t)$, $x(t)$, and $g(t)$ are defined as

$$H(t) = \begin{bmatrix} I_{n\times n} & J^{\mathrm{T}}(t) & A^{\mathrm{T}}(t) \\ J(t) & 0_{15\times 15} & 0_{15\times 2n} \\ -A(t) & 0_{2n\times 15} & I_{2n\times 2n} \end{bmatrix} \in \mathbb{R}^{\omega\times\omega},$$
$$x(t) = \begin{bmatrix} \dot{\theta}(t) \\ \lambda(t) \\ \mu(t) \end{bmatrix} \in \mathbb{R}^{\omega}, \quad g(t) = \begin{bmatrix} q(t) \\ -d(t) \\ r(t) \end{bmatrix} \in \mathbb{R}^{\omega},$$

and $r(t)$, $v(t)$, and $\omega$ are defined as

$$r(t) = \zeta(t) - \sqrt{v(t)\circ v(t) + \mu(t)\circ\mu(t) + \varepsilon}, \quad \varepsilon \to 0^{+},$$
$$v(t) = \zeta(t) - A(t)\dot{\theta}(t), \quad \omega = 15 + 3n, \quad \lambda(t) \in \mathbb{R}^{15}, \quad \mu(t) \in \mathbb{R}^{2n},$$

where the symbol $\circ$ denotes the Hadamard product [42], and $\lambda(t)$ and $\mu(t)$ are the multipliers of the equality and inequality constraints, respectively.
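The complementarity-encoding property of the PFB mapping underlying the third block row can be illustrated with a scalar sketch (our own helper; $\varepsilon$ is fixed at a small positive value rather than taken to the limit):

```python
import math

def pfb(a, b, eps=1e-10):
    """Scalar perturbed Fischer-Burmeister function
    psi(a, b) = a + b - sqrt(a*a + b*b + eps). As eps -> 0+,
    psi(a, b) = 0 exactly encodes the complementarity conditions
    a >= 0, b >= 0, a*b = 0, yet unlike min(a, b) it is smooth
    everywhere, which keeps the TVES differentiable."""
    return a + b - math.sqrt(a * a + b * b + eps)

print(abs(pfb(0.0, 3.0)) < 1e-6)  # True: complementary pair, residual ~ 0
print(abs(pfb(2.0, 3.0)) > 1.0)   # True: both strictly positive, violated
```

In the TVES, the role of $a$ is played by the constraint slack $v(t)$ and that of $b$ by the multiplier $\mu(t)$, so driving the third residual block to zero enforces the KKT complementarity smoothly.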
From the TVES problem (10), the equations can be rewritten as the following error equation:

$$e(t) = H(t)x(t) + g(t). \tag{11}$$

The first-order derivative of $e(t)$ is

$$\dot{e}(t) = \frac{\mathrm{d}e(t)}{\mathrm{d}t} = D(t)\dot{x}(t) + V(t)x(t) + \varrho(t), \tag{12}$$

where $D(t)$, $V(t)$, and $\varrho(t)$ are defined as

$$D(t) = \begin{bmatrix} I_{n\times n} & J^{\mathrm{T}}(t) & A^{\mathrm{T}}(t) \\ J(t) & 0_{15\times 15} & 0_{15\times 2n} \\ \big(l_1(t) - I_{2n\times 2n}\big)A(t) & 0_{2n\times 15} & I_{2n\times 2n} - l_2(t) \end{bmatrix} \in \mathbb{R}^{\omega\times\omega},$$
$$V(t) = \begin{bmatrix} 0_{n\times n} & \dot{J}^{\mathrm{T}}(t) & 0_{n\times 2n} \\ \dot{J}(t) & 0_{15\times 15} & 0_{15\times 2n} \\ 0_{2n\times n} & 0_{2n\times 15} & 0_{2n\times 2n} \end{bmatrix} \in \mathbb{R}^{\omega\times\omega}, \quad \varrho(t) = \begin{bmatrix} \dot{q}(t) \\ -\dot{d}(t) \\ \big(I_{2n\times 2n} - l_1(t)\big)\dot{\zeta}(t) \end{bmatrix} \in \mathbb{R}^{\omega},$$

and $l_1(t)$, $l_2(t)$, and $\Omega(t)$ are defined as

$$l_1(t) = \Lambda\big(\Omega(t)\circ v(t)\big), \quad l_2(t) = \Lambda\big(\Omega(t)\circ \mu(t)\big),$$
$$\Omega(t) = \big(v(t)\circ v(t) + \mu(t)\circ\mu(t) + \varepsilon\big)^{-\frac{1}{2}},$$

where $\Lambda(\cdot): \mathbb{R}^{2n} \to \mathbb{R}^{2n\times 2n}$ is the diagonal matrix operator.
Using Equation (12), the preceding expressions are equivalently reformulated as the following dynamical equation:

$$\dot{e}(t) = -\phi(e(t), t), \tag{13}$$

where $\phi(e(t), t)$ is the control law.
Substituting Equation (13) into Equation (12) yields

$$D(t)\dot{x}(t) + V(t)x(t) + \varrho(t) = -\phi(e(t), t).$$

Then, the state-space model is given by

$$\dot{x}(t) = -D^{\dagger}(t)\big(\phi(e(t), t) + V(t)x(t) + \varrho(t)\big), \tag{14}$$

where the symbol $\dagger$ denotes the pseudo-inverse operator, i.e., $D^{\dagger}(t)$ is the pseudo-inverse matrix of $D(t)$.
Remark 2. 
The existence and numerical stability of $D^{\dagger}(t)$ depend on key conditions tied to the structure of $D(t)$ and the algorithm design. For existence, $D(t)$ maintains a time-invariant rank throughout the task interval, which stems from the column-full-rank property of two key matrices: the Jacobian $J(t)$ and the constraint transformation matrix $A(t)$. For numerical stability, the submatrices of $D(t)$ remain bounded: physical joint limits ensure the boundedness of the Jacobian $J(t)$; the matrix $A(t)$ has entries only in $\{0, \pm 1\}$; and the terms $l_1(t)$ and $l_2(t)$ are kept bounded by the smooth function $\zeta(t)$ and the joint velocity limits. These properties together prevent abnormal amplification of the entries of $D^{\dagger}(t)$. Additionally, the LSE smoothing and a small positive regularization $\varepsilon$ reduce the spectral condition number of $D(t)$. At the same time, the three-step Taylor-type discretization preserves both the rank and the boundedness of $D(t)$ in the discrete implementation. These measures help maintain the stability of the system.
Since the design of $\phi(e(t), t)$ does not incorporate a time-dependent term that is independent of $e(t)$, the notation is simplified to $\phi(e(t))$.
The control law $\phi(e(t))$ in the DFTCN algorithm is formulated as

$$\phi_{\mathrm{novel}}(e(t)) = \frac{\gamma_1}{\tau}\, e(t) \circ \psi_{\mathrm{abs}}^{\alpha-1}(e(t)), \tag{15}$$

where $\psi_{\mathrm{abs}}(\cdot)$ is the element-wise absolute-value operator with the power $\alpha - 1$ applied element-wise, $\tau > 0$ is the sampling gap, $\gamma_1 > 0$, and $0 < \alpha < 1$.
Remark 3. 
The condition $\gamma_1 > 0$ ensures global exponential stability. Based on the relevant study [43] and the tracking precision requirements, the empirical range of $\gamma_1$ is recommended as $[0.1, 10]$. The parameter $\gamma_1$ regulates the error convergence rate: a larger value results in faster convergence but increases the risk of oscillations, whereas a smaller value significantly slows down convergence. Therefore, the value of $\gamma_1$ is set to 5. The parameter $\alpha$ satisfies $0 < \alpha < 1$, as required for finite-time stability [44]; values of $\alpha$ outside this range lead to infinite convergence time. In the simulations, $\alpha = 0.5$ is chosen to balance the initial correction speed and the oscillation risk. The sampling gaps $\tau \in \{0.01\,\mathrm{s}, 0.02\,\mathrm{s}\}$ correspond to the robotic hand's nominal 100 Hz control frequency. A smaller $\tau$ yields faster convergence and a smaller steady-state error, whereas an excessively large $\tau$ may cause error accumulation and oscillations.

2.3. Discrete DFTCN Algorithm

To keep the steady-state residual of the DFTCN algorithm within a reasonable range, this subsection discretizes its control law using a three-step Taylor-type formula. An Euler-type discrete neurodynamics (ETDN) algorithm is also derived to serve as a baseline for subsequent performance evaluation.
The discrete form of Equation (14) is derived as

$$\dot{x}_k = -D_k^{\dagger}\big(\phi(e_k) + V_k x_k + \varrho_k\big), \tag{16}$$

where the subscript $k$ denotes the value of the variable at time $t = k\tau$.
Based on [45], the three-step general Taylor-type discretization formula is depicted as

$$\dot{x}_k = \frac{2x_{k+1} - 3x_k + 2x_{k-1} - x_{k-2}}{2\tau} + O(\tau^2), \tag{17}$$

where $O(\tau^2) \in \mathbb{R}^{\omega}$ denotes a vector whose elements are all of order $O(\tau^2)$.
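The $O(\tau^2)$ truncation order of formula (17) can be verified numerically on a smooth test signal: halving $\tau$ should reduce the error roughly fourfold. A minimal sketch, using $x(t) = \sin t$ as an illustrative test function:

```python
import math

def taylor3_derivative(x, t, tau):
    """Three-step Taylor-type estimate of x'(t) from formula (17):
    (2x(t+tau) - 3x(t) + 2x(t-tau) - x(t-2*tau)) / (2*tau),
    whose truncation error is O(tau^2)."""
    return (2 * x(t + tau) - 3 * x(t) + 2 * x(t - tau)
            - x(t - 2 * tau)) / (2 * tau)

x, t = math.sin, 1.0
exact = math.cos(t)                        # true derivative of sin at t
err1 = abs(taylor3_derivative(x, t, 0.02) - exact)
err2 = abs(taylor3_derivative(x, t, 0.01) - exact)
# Second-order accuracy: halving tau cuts the error roughly fourfold.
print(3.0 < err1 / err2 < 5.0)   # True
```

Expanding the stencil in Taylor series confirms this: the combination cancels both the zeroth-order and the second-derivative terms, leaving a leading error of $\tfrac{2}{3}\tau^2 x'''(t)$.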
By integrating Equations (17) and (16) and rearranging, one obtains

$$x_{k+1} = \frac{1}{2}\big(3x_k - 2x_{k-1} + x_{k-2}\big) - \tau D_k^{\dagger}\big(\phi(e_k) + V_k x_k + \varrho_k\big) + O(\tau^3). \tag{18}$$

Then, by substituting the control law (15) into Equation (18), the DFTCN algorithm is designed as

$$x_{k+1} = \frac{1}{2}\big(3x_k - 2x_{k-1} + x_{k-2}\big) - D_k^{\dagger}\Big(\gamma_1\, e_k \circ \psi_{\mathrm{abs}}^{\alpha-1}(e_k) + \tau\big(V_k x_k + \varrho_k\big)\Big) + O(\tau^3), \tag{19}$$

where the factor $\tau$ from Equation (18) cancels the $1/\tau$ in the control law (15).
The algorithm (19) is detailed in Algorithm 1, which includes the initialization parameters, iterative update rules, and termination conditions.
For comparative analysis in subsequent experiments, the ETDN algorithm is introduced as a baseline. It utilizes the zeroing neurodynamic approach [46] and is derived from the Euler difference formula, which yields

$$x_{k+1} \doteq x_k - D_k^{\dagger}\big(\gamma_1 e_k + \tau(V_k x_k + \varrho_k)\big),$$

where the symbol $\doteq$ denotes the computational assignment operation.
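A minimal scalar sketch of the ETDN baseline on a toy instance $Hx + g(t) = 0$ with constant $H$ (so that $D = H$, $V = 0$, and $\varrho = \dot{g}$). All values here ($H = 2$, $g(t) = -\sin t$, $\gamma_1 = 0.5$, $\tau = 0.01$) are illustrative choices of ours, not the paper's experiment settings:

```python
import math

# Scalar ETDN update on H*x + g(t) = 0 (exact solution x*(t) = sin(t)/2):
# x_{k+1} = x_k - (gamma1*e_k + tau*g'(t_k)) / H.
H, gamma1, tau = 2.0, 0.5, 0.01

def g(t):
    return -math.sin(t)

def gdot(t):
    return -math.cos(t)

x = 0.3                              # start away from the solution
for k in range(400):
    t = k * tau
    e = H * x + g(t)                 # error e_k = H*x_k + g(t_k)
    x = x - (gamma1 * e + tau * gdot(t)) / H
residual = abs(H * x + g(400 * tau))
print(residual < 1e-2)               # converges to a small residual
```

The error recursion here contracts only linearly per step (factor $1 - \gamma_1$), which is the qualitative gap the finite-time DFTCN law is designed to close.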
Algorithm 1 DFTCN Algorithm
Input: sampling gap $\tau$, convergence parameter $\gamma_1$, power exponent $\alpha$, initial states $x$, maximum iteration $K$
Output: joint velocity solution $\dot{\theta}(t)$
1: for $k = 1$ to $K$ do
2:   Compute $H_k$ and $g_k$ according to the TVES problem (10)
3:   Compute the error $e_k = H_k x_k + g_k$
4:   Compute $D_k$, $V_k$, and $\varrho_k$ according to Equation (12)
5:   Compute the pseudo-inverse $D_k^{\dagger}$
6:   Compute $\phi_k = \frac{\gamma_1}{\tau}\, e_k \circ |e_k|^{\alpha-1}$
7:   Compute $x_{k+1} = \frac{1}{2}\big(3x_k - 2x_{k-1} + x_{k-2}\big) - \tau D_k^{\dagger}\big(\phi_k + V_k x_k + \varrho_k\big)$
8:   Extract the joint velocity $\dot{\theta}_k = [I_{n\times n}, 0_{n\times 15}, 0_{n\times 2n}]\, x_k$
9:   Update the joint angle $\theta_k = \theta_{k-1} + \tau\dot{\theta}_k$
10:   if $\min(\theta_k) < \theta^-$ or $\max(\theta_k) > \theta^+$ then
11:     $\theta_k = \mathrm{clip}(\theta_k, \theta^-, \theta^+)$
12:     $\dot{\theta}_k = (\theta_k - \theta_{k-1})/\tau$
13:   end if
14:   if $\min(\dot{\theta}_k) < \dot{\theta}^-$ or $\max(\dot{\theta}_k) > \dot{\theta}^+$ then
15:     $\dot{\theta}_k = \mathrm{clip}(\dot{\theta}_k, \dot{\theta}^-, \dot{\theta}^+)$
16:   end if
17:   $x_{k-2} \leftarrow x_{k-1}$, $x_{k-1} \leftarrow x_k$
18: end for
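To make the update rule concrete, the following is a minimal scalar sketch of iteration (19) on a toy instance $Hx + g(t) = 0$ with constant $H$ (so that $D = H$, $V = 0$, $\varrho = \dot{g}$, and no clipping steps are needed). The parameters ($H = 2$, $g(t) = -\sin t$, $\gamma_1 = 0.5$, $\alpha = 0.95$, $\tau = 0.01$) are deliberately tame illustrative choices of ours, not the paper's settings:

```python
import math

# Scalar DFTCN update (19): x_{k+1} = (3x_k - 2x_{k-1} + x_{k-2})/2
#   - (gamma1*e_k*|e_k|^(alpha-1) + tau*g'(t_k)) / H.
# Exact solution of the toy problem: x*(t) = sin(t)/2.
H, gamma1, alpha, tau = 2.0, 0.5, 0.95, 0.01

def g(t):
    return -math.sin(t)

def gdot(t):
    return -math.cos(t)

xs = [0.3, 0.3, 0.3]                 # warm start: x_{k-2}, x_{k-1}, x_k
for k in range(2, 400):
    t = k * tau
    e = H * xs[-1] + g(t)            # error e_k = H*x_k + g(t_k)
    power = gamma1 * e * abs(e) ** (alpha - 1) if e != 0 else 0.0
    x_next = 0.5 * (3 * xs[-1] - 2 * xs[-2] + xs[-3]) \
             - (power + tau * gdot(t)) / H
    xs = [xs[-2], xs[-1], x_next]    # shift the three-step history
residual = abs(H * xs[-1] + g(400 * tau))
print(residual < 1e-3)               # tracks the solution x*(t) = sin(t)/2
```

Note the three-sample history buffer, mirroring step 17 of Algorithm 1: the multistep stencil needs $x_k$, $x_{k-1}$, and $x_{k-2}$ at every iteration.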
Remark 4. 
The three-step Taylor-type discretization is adopted instead of the first-order Euler method [47], as it delivers higher temporal accuracy and stronger stability. In particular, it reduces error accumulation by retaining higher-order derivatives. For the DFTCN algorithm, which experiences significant temporal variation and cannot use infinitesimal step sizes, this multi-step scheme provides tighter control over error propagation. It also allows moderately larger steps with only a small increase in per-step cost.

2.4. Theoretical Analysis

Theoretical proofs for the stability, finite-time convergence, and accuracy of the DFTCN algorithm (19) are provided below.
To establish the fundamental properties of the DFTCN algorithm, the stability behavior is first investigated. The following theorem provides a rigorous analysis of the global asymptotic stability (i.e., global convergence) of the algorithm.
Theorem 1. 
Consider the dynamical equation of the DFTCN algorithm:

$$\dot{e}(t) = -\frac{\gamma_1}{\tau}\, e(t) \circ \psi_{\mathrm{abs}}^{\alpha-1}(e(t)). \tag{20}$$

This system is globally asymptotically stable at the equilibrium point $e = 0$.
Proof. 
Let the $i$-th element of $e(t)$ be denoted by $e_i(t)$ for $i = 1, 2, \ldots, \omega$. Equation (20) then becomes

$$\dot{e}_i(t) = -\frac{\gamma_1}{\tau}\, e_i(t)\, |e_i(t)|^{\alpha-1}. \tag{21}$$

Since each element $\dot{e}_i(t)$ of $\dot{e}(t)$ in Equation (20) depends only on the corresponding $e_i(t)$, the problem reduces to proving global asymptotic stability at the equilibrium point $e_i = 0$. To establish this, a formal proof is given as follows.
Consider the Lyapunov function

$$V(e_i(t)) = \frac{e_i^2(t)}{2}.$$

According to Equation (21), the first-order derivative of $V(e_i(t))$ is given by

$$\dot{V}(e_i(t)) = \frac{\mathrm{d}}{\mathrm{d}t} V(e_i(t)) = e_i(t)\dot{e}_i(t) = -\frac{\gamma_1}{\tau}\, |e_i(t)|^{\alpha+1} \le 0,$$

with equality if and only if $e_i(t) = 0$.
Furthermore, the radial unboundedness condition holds:

$$\lim_{|e_i| \to \infty} V(e_i(t)) = \infty.$$

In summary, according to the Lyapunov stability theorem, Equation (21) is globally asymptotically stable at the equilibrium point $e_i = 0$. Consequently, Equation (20) is globally asymptotically stable at the equilibrium point $e = 0$. □
Building upon the stability analysis, the finite-time convergence properties of the DFTCN algorithm are now examined. The following theorem provides a formal proof of its finite-time convergence.
Theorem 2. 
Equation (20) converges to the equilibrium point e = 0 in finite time.
Proof. 
From the proof of Theorem 1, it follows that
V ˙ ( e i ( t ) ) = γ 1 τ | e i ( t ) | α + 1 = γ 1 τ | e i 2 ( t ) | ( α + 1 ) / 2 = 2 ( α + 1 ) / 2 γ 1 τ V ( α + 1 ) / 2 ( e i ( t ) ) ,
expressing V ˙ ( e i ( t ) ) in differential form and rearranging terms yields
d V ( e i ( t ) ) V ( α + 1 ) / 2 ( e i ( t ) ) c d t ,
where c is defined as
c = 2 ( α + 1 ) / 2 γ 1 τ .
Integrating both sides of the inequality yields
V ( e i ( 0 ) ) V ( e i ( t 1 ) ) V ( α + 1 ) / 2 ( e i ( t ) ) d ( V ( e i ( t ) ) ) c 0 t 1 d t ,
which leads to the following inequality:
1 α 2 1 V ( 1 α ) / 2 ( e i ( t ) ) V ( e i ( 0 ) ) V ( e i ( t 1 ) ) c t 1 ,
rearranging the above inequality yields
$$V^{(1-\alpha)/2}(e_i(t_1)) \leq V^{(1-\alpha)/2}(e_i(0)) - \frac{1-\alpha}{2}\,c\,t_1.$$
The convergence time T c is determined by setting t 1 = T c and V ( e i ( T c ) ) = 0 , yielding
$$0 \leq V^{(1-\alpha)/2}(e_i(0)) - \frac{1-\alpha}{2}\,c\,T_c,$$
which can be reformulated as
$$T_c \leq \frac{2}{(1-\alpha)\,c}\, V^{(1-\alpha)/2}(e_i(0)) = \frac{2^{(1-\alpha)/2}\,\tau}{(1-\alpha)\,\gamma_1}\, V^{(1-\alpha)/2}(e_i(0)) = \frac{\tau\,|e_i(0)|^{1-\alpha}}{(1-\alpha)\,\gamma_1}.$$
Therefore, the convergence time of Equation (20) is upper bounded by $\tau (1-\alpha)^{-1} \gamma_1^{-1} \max_{i = 1, 2, \ldots, \omega} |e_i(0)|^{1-\alpha}$. □
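The bound of Theorem 2 can likewise be checked numerically for a scalar error: compute the bound, integrate Equation (21) slightly past it with a fine Euler scheme, and verify that the error has essentially vanished. The parameter values are again illustrative assumptions, not the paper's settings.

```python
import math

# Illustrative parameters (assumed, not the paper's tuned values)
gamma1, tau, alpha = 1.0, 0.01, 0.5
e0 = 0.8

# Theorem 2 bound: T_c <= tau * |e_i(0)|**(1 - alpha) / ((1 - alpha) * gamma1)
T_bound = tau * abs(e0) ** (1 - alpha) / ((1 - alpha) * gamma1)

# Integrate Equation (21) just past the bound and confirm |e| has collapsed
dt, e, t = 1e-6, e0, 0.0
while t < 1.05 * T_bound:
    e += dt * (-(gamma1 / tau) * math.copysign(abs(e) ** alpha, e))
    t += dt

print(round(T_bound, 4), abs(e) < 1e-4)  # → 0.0179 True
```

For this scalar case the bound is tight: the closed-form solution of Equation (21) reaches zero exactly at $T_c$, so checking shortly after the bound suffices.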
The analysis is further extended to focus on the accuracy of the DFTCN algorithm in solving the TVQP problem. The following theorem provides a detailed characterization of the algorithm’s accuracy in terms of its steady-state residual error.
Theorem 3. 
The actual solution of the DFTCN algorithm converges to the theoretical solution of the TVQP problem, with a maximal steady-state residual error $\lim_{k\to\infty}\|e_{k+1}\|_2$ of order $O(\tau^3)$.
Proof. 
According to the DFTCN algorithm, the theoretical solution x k + 1 * of the TVQP problem is given by
$$x_{k+1}^* = x_{k+1} + O(\tau^3).$$
Furthermore, the theoretical solution x k + 1 * satisfies
$$H_{k+1}\, x_{k+1}^* + g_{k+1} = 0.$$
Thus, it follows that
$$\lim_{k\to\infty}\left\|e_{k+1}\right\|_2 = \lim_{k\to\infty}\left\|H_{k+1} x_{k+1} + g_{k+1}\right\|_2 = \lim_{k\to\infty}\left\|H_{k+1} x_{k+1}^* + g_{k+1} - H_{k+1}\, O(\tau^3)\right\|_2 = \lim_{k\to\infty}\left\|H_{k+1}\, O(\tau^3)\right\|_2 = O(\tau^3). \tag{22}$$
Equation (22) shows that the solution generated by the DFTCN algorithm converges to the theoretical solution of the TVQP problem with a maximal steady-state residual error $\lim_{k\to\infty}\|e_{k+1}\|_2$ of order $O(\tau^3)$. Hence, the proof is complete. □
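The order of the steady-state residual can be illustrated on a toy scalar problem: if a solver's iterate deviates from the theoretical solution by $O(\tau^3)$, the residual $\|H_{k+1}x_{k+1} + g_{k+1}\|_2$ inherits that order, so halving $\tau$ should shrink it by roughly $2^3 = 8$. The functions h(t), g(t) and the deviation constant below are assumptions of this sketch; it mimics the error model of Theorem 3 rather than the DFTCN iteration itself.

```python
import numpy as np

# Toy time-varying scalar problem h(t) x + g(t) = 0 with known exact solution
h = lambda t: 2.0 + np.sin(t)
g = lambda t: -np.cos(t)
x_star = lambda t: np.cos(t) / (2.0 + np.sin(t))  # exact (theoretical) solution

def steady_residual(tau, C=1.0):
    # Mimic a solver whose iterate misses the exact solution by C * tau**3,
    # as in Theorem 3: x_{k+1} = x*_{k+1} + O(tau**3)
    t = np.arange(0.0, 5.0, tau)
    x = x_star(t) + C * tau ** 3
    return np.max(np.abs(h(t) * x + g(t)))

ratio = steady_residual(0.01) / steady_residual(0.005)
print(round(ratio, 2))  # → 8.0, consistent with an O(tau^3) residual
```

An observed ratio near 8 when the sampling gap is halved is the standard empirical signature of third-order steady-state accuracy.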

3. Simulation and Experiment Verification

In this section, numerical simulations confirm that the DFTCN algorithm delivers favorable overall performance in solving the trajectory tracking problem of a robotic hand with high DoFs. Real-robot experiments further verify that the algorithm also achieves precise performance in actual object grasping.

3.1. Numerical Simulations for Robotic Hand

To verify the convergence, accuracy, and robustness of the DFTCN algorithm, a robotic hand with 15 DoFs is used to validate the trajectory tracking performance, as shown in Figure 3. The global coordinate frame {O} is established at the center of the base of the hand, and the local coordinate frames {O_1}, …, {O_14} are established at the joints of the robotic hand, each describing the rotational DoF of its joint. Every joint possesses a rolling DoF, while {O_1} additionally features a pitching DoF. In the global coordinate frame, the red dot at the tip of each finger represents the end-effector of the corresponding finger.
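To make the frame-to-end-effector relationship concrete, the sketch below computes the forward kinematics and analytic Jacobian of a hypothetical planar finger with three rolling DoFs; the link lengths and joint angles are illustrative assumptions, not the actual hand geometry.

```python
import numpy as np

# Assumed link lengths (m) of a hypothetical planar 3-roll-DoF finger
L = np.array([0.045, 0.025, 0.020])

def fingertip(theta):
    """Fingertip (end-effector) position in the finger's local frame."""
    phi = np.cumsum(theta)  # absolute orientation of each link
    return np.array([np.sum(L * np.cos(phi)), np.sum(L * np.sin(phi))])

def jacobian(theta):
    """Analytic Jacobian d(fingertip)/d(theta), the joint-rate-to-velocity
    map that a TVQP-style tracking formulation relies on."""
    phi = np.cumsum(theta)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(phi[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(phi[j:]))
    return J

theta = np.array([0.3, 0.4, 0.2])  # example joint angles (rad)
p = fingertip(theta)               # current end-effector position
```

Validating such an analytic Jacobian against finite differences of the forward kinematics is a quick correctness check before the model enters a tracking formulation.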
Relevant configuration details and desired trajectories for the simulations are organized below. The key parameters of the numerical simulations in this study are summarized in Table 1. The desired trajectories of the end-effectors are represented by functions of time t, as shown in Table 2.
Figure 4 shows five sets of subfigures, each corresponding to a finger of the robotic hand: (a) the index finger, (b) the middle finger, (c) the ring finger, (d) the little finger, and (e) the thumb. Each set consists of three individual subfigures. The leftmost subfigure illustrates the desired and actual trajectories of the end-effector. The middle subfigure depicts the position error of the end-effector, where $\varepsilon_x^{\mathrm{DFTCN}}$, $\varepsilon_y^{\mathrm{DFTCN}}$, $\varepsilon_z^{\mathrm{DFTCN}}$ and $\varepsilon_x^{\mathrm{ETDN}}$, $\varepsilon_y^{\mathrm{ETDN}}$, $\varepsilon_z^{\mathrm{ETDN}}$ denote the position errors along the x, y, and z coordinate directions for the DFTCN and ETDN algorithms, respectively. The rightmost subfigure presents the $\|e_{k+1}\|_2$ curves. The leftmost subfigure shows that the actual trajectories generated by the DFTCN algorithm approximate the desired trajectories more closely. The middle subfigure quantifies the position errors, which more explicitly illustrates the accuracy of the DFTCN algorithm. The $\|e_{k+1}\|_2$ curves in the rightmost subfigure indicate that the DFTCN algorithm solves the TVQP problem with an accuracy roughly three orders of magnitude higher than that of the ETDN algorithm. This result may appear inconsistent with the position errors in the middle subfigure; the discrepancy is attributed to the nonlinear relationship between $x_{k+1}$ and the actual end-effector trajectories. Based on the above analysis, the DFTCN algorithm exhibits favorable convergence and high accuracy in tracking the end-effector trajectory during the grasping motion. Figure 5 shows the corresponding results with τ = 0.02 s, indicating that the DFTCN algorithm maintains high accuracy and favorable convergence even as τ increases from 0.01 s to 0.02 s.
To enable a comprehensive comparison of algorithm performance, this study additionally includes the DIZN [48] and GAGZNS [49] algorithms. The evaluation covers convergence time, convergence rate, RMSE, MAE, and per-iteration computation time. Table 3 and Table 4 present the results for the four algorithms at sampling gaps τ = 0.01 s and τ = 0.02 s, respectively. The results show that the DFTCN algorithm outperforms the others on the key metrics, including convergence time, RMSE, and MAE. Although the DFTCN algorithm's convergence time increases with τ, it remains substantially faster than that of the other methods. These findings offer practical guidance for selecting algorithms under different sampling gaps.
Meanwhile, to further verify the robustness of the DFTCN algorithm under non-ideal operating conditions, a noise-interference comparison is conducted. Specifically, Gaussian white noise with zero mean and a variance of 2.5 × 10⁻⁷ m² is introduced to simulate camera error, with the noise-free condition taken as the ideal benchmark. The average tracking error is selected as the quantitative indicator, and data are collected over a 5 s period to analyze the error evolution. As shown in Figure 6, although the average tracking error of the DFTCN algorithm increases slightly under noise, the error curve consistently decays over time and eventually converges stably to a low level. This result directly confirms the robustness of the DFTCN algorithm against low-to-moderate-intensity camera noise, providing experimental support for its practical deployment.
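This noise model can be mimicked on the scalar error dynamics: at each step the measured error is corrupted by zero-mean Gaussian noise with the stated variance of 2.5 × 10⁻⁷ m², and the error nonetheless settles near the noise floor. All parameters other than the noise variance are illustrative assumptions of this sketch.

```python
import math
import random

random.seed(0)
gamma1, tau, alpha = 1.0, 0.01, 0.5   # assumed controller parameters
sigma = math.sqrt(2.5e-7)             # std of the stated camera-noise variance (m)
dt, steps = 1e-4, 5000

e, tail = 0.05, []
for k in range(steps):
    e_meas = e + random.gauss(0.0, sigma)  # camera noise corrupts the measurement
    # Update driven by the noisy measured error instead of the true error
    e += dt * (-(gamma1 / tau) * math.copysign(abs(e_meas) ** alpha, e_meas))
    if k >= steps - 1000:
        tail.append(abs(e))

avg_tail_error = sum(tail) / len(tail)
print(avg_tail_error < 5e-3)  # → True: error settles well below its initial 0.05
```

The steady-state error no longer reaches zero but remains bounded near the noise level, mirroring the "slightly increased yet stably converging" behavior reported for the DFTCN algorithm.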
In summary, the simulations cover both ideal conditions and non-ideal conditions. The sub-millimeter accuracy achieved under ideal conditions illustrates the DFTCN algorithm’s inherent performance and stable convergence properties. Furthermore, the consistent error attenuation trend and controllable performance degradation observed under non-ideal conditions further confirm its robustness against practical disturbances. These results help bridge the gap between theoretical design and real-world application requirements.

3.2. Real-Robot Experiments

The numerical simulation results have verified that the DFTCN algorithm exhibits favorable theoretical performance. To further confirm the practical performance of the DFTCN algorithm in achieving precise grasping, a grasping experiment is conducted. This experiment uses a collaborative system consisting of the robotic arm and the robotic hand. The experimental setup is shown in Figure 7.
The hardware specifications and key algorithm parameters of the real-robot experiments are summarized in Table 5. The experimental procedure consists of the following steps. Initially, the robotic hand is manually guided to grasp the tissue box, and the trajectories of the end-effector are recorded as the desired trajectories. Subsequently, the camera collects the point cloud data of the grasped object, which is used to generate the corresponding grasping posture. After coordinate transformation, the actual trajectories of the end-effector are obtained.
This experiment aims to evaluate the trajectory tracking and precise grasping ability of the robotic hand when grasping objects similar to tissue boxes, as shown in Figure 8. The study compares the performance of two algorithms in grasping the object under different sampling gaps. In this study, the DFTCN algorithm runs at τ = 0.02 s , while the ETDN algorithm runs at τ = 0.01 s . The two algorithms respectively drive the robotic hand to complete continuous tasks, including grasping, lifting, and removing objects from the surface. The grasping performance of the robotic hand is validated by observing whether the grasped objects slide or fall.
Additionally, camera noise (Gaussian white noise with zero mean and a variance of 2.5 × 10⁻⁷ m²) and external disturbances are introduced in the experiments. Such factors are used to further validate the robustness of the DFTCN algorithm. As shown in Figure 9, the robotic hand moves toward the target object. After reaching the pre-grasp position, the object is manually moved to a new position due to a sudden external disturbance. When subjected to this disturbance, the DFTCN algorithm does not fail. Instead, it quickly tracks the object's new position and ultimately completes the grasping task successfully. This experiment illustrates the algorithm's robustness against uncertainties.
The experiments illustrate that the DFTCN algorithm still achieves precise grasping and trajectory tracking of objects, such as tissue boxes, even at the longer sampling gap. In contrast, the ETDN algorithm, despite operating at a shorter sampling gap, fails to complete the grasping task in this real-robot deployment. Moreover, even with the introduced camera noise and external disturbances, the robotic hand completes the grasping task. These results indicate that the proposed algorithm delivers commendable performance in the grasping experiments.

4. Conclusions

In this study, the DFTCN algorithm is proposed to address the trajectory tracking problem of a robotic hand with high DoFs. This approach reformulates the grasping task as a TVQP problem. It integrates the joint constraints through the LSE function and derives the control law using a three-step Taylor-type discretization method. Theoretical analysis proves the stability, finite-time convergence, and accuracy (with a maximal steady-state residual error of $O(\tau^3)$) of the proposed algorithm. Numerical simulations show that the DFTCN algorithm achieves sub-millimeter accuracy, with an RMSE of 1.399 × 10⁻⁴ m. It converges within 0.740 s at a sampling gap of τ = 0.01 s and maintains robust performance under Gaussian white noise. Real-robot experiments verify that the DFTCN algorithm achieves precise grasping even at τ = 0.02 s. The robotic hand, equipped with the DFTCN algorithm, also accomplishes grasping tasks in the presence of camera noise and external disturbances. Furthermore, the framework can be extended to dynamic grasping by incorporating real-time desired trajectories from sensory feedback. It can also be adapted for tactile feedback control by including a force deviation term in the optimization, enabling coordinated control of both position and grasping force. Our future work will implement these extensions in more complex manipulation tasks and further reduce computational complexity to enhance real-time performance.

Author Contributions

Conceptualization, J.L., Y.Z., H.C. and Y.X.; methodology, H.C. and J.L.; software, Y.X. and H.L.; validation, H.C. and Y.X.; formal analysis, H.C. and Y.X.; investigation, H.C. and H.L.; resources, J.L. and Y.H.; data curation, Y.H. and H.L.; writing—original draft preparation, H.C. and Y.X.; writing—review and editing, J.L., Y.Z., H.C., Y.X. and H.L.; visualization, H.L.; supervision, J.L., Y.Z. and Y.H.; project administration, J.L. and Y.H.; funding acquisition, J.L. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Natural Science Foundation of China (No. 51905251 and No. 62376290), the Shenzhen Major Science and Technology Program under Grant 202402004, the Guangdong Provincial Special Funds for Promoting High Quality Economic Development (Marine Economic Development) in Six Major Marine Industries (No. GDNRC[2024]52) and the Natural Science Foundation of Guangdong Province (No. 2024A1515011016).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhong, Y.; Jiang, Q.; Yu, J.; Ma, Y. Dexgrasp anything: Towards universal robotic dexterous grasping with physics awareness. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 10–17 June 2025; pp. 22584–22594. [Google Scholar]
  2. Weng, Z.; Lu, H.; Kragic, D.; Lundell, J. Dexdiffuser: Generating dexterous grasps with diffusion models. IEEE Robot. Autom. Lett. 2024, 9, 11834–11840. [Google Scholar] [CrossRef]
  3. Reader, A.T.; Gaile, L.; Li, W.; Cheah Mc Corry, E.E.; Mackie, K. Multiple object handling: Exploring strategies for cumulative grasping and transport using a single hand. Exp. Brain Res. 2025, 243, 165. [Google Scholar] [CrossRef]
  4. Zhou, J.; Huang, J.; Dou, Q.; Abbeel, P.; Liu, Y. A dexterous and compliant (dexco) hand based on soft hydraulic actuation for human inspired fine in-hand manipulation. IEEE Trans. Robot. 2025, 41, 666–686. [Google Scholar] [CrossRef]
  5. Hu, W.; Huang, B.; Lee, W.W.; Yang, S.; Zheng, Y.; Li, Z. Dexterous in-hand manipulation of slender cylindrical objects through deep reinforcement learning with tactile sensing. Robot. Auton. Syst. 2025, 186, 104904. [Google Scholar] [CrossRef]
  6. Liang, H.; Cong, L.; Hendrich, N.; Li, S.; Sun, F.; Zhang, J. Multifingered grasping based on multimodal reinforcement learning. IEEE Robot. Autom. Lett. 2021, 7, 1174–1181. [Google Scholar] [CrossRef]
  7. Charlesworth, H.J.; Montana, G. Solving challenging dexterous manipulation tasks with trajectory optimisation and reinforcement learning. In Proceedings of the 38th International Conference on Machine Learning (ICML), Virtual, 18–24 July 2021; pp. 1496–1506. [Google Scholar]
  8. Mandikal, P.; Grauman, K. Dexvip: Learning dexterous grasping with human hand pose priors from video. In Proceedings of the Conference on Robot Learning, Auckland, New Zealand, 14–18 December 2022; pp. 651–661. [Google Scholar]
  9. Kim, H.; Ohmura, Y.; Kuniyoshi, Y. Gaze-based dual resolution deep imitation learning for high-precision dexterous robot manipulation. IEEE Robot. Autom. Lett. 2021, 6, 1630–1637. [Google Scholar] [CrossRef]
  10. Chi, C.; Xu, Z.; Feng, S.; Cousineau, E.; Du, Y.; Burchfiel, B.; Tedrake, R.; Song, S. Diffusion policy: Visuomotor policy learning via action diffusion. Int. J. Robot. Res. 2025, 44, 1684–1704. [Google Scholar] [CrossRef]
  11. Yang, L.; Huang, B.; Li, Q.; Tsai, Y.Y.; Lee, W.W.; Song, C.; Pan, J. Tacgnn: Learning tactile-based in-hand manipulation with a blind robot using hierarchical graph neural network. IEEE Robot. Autom. Lett. 2023, 8, 3605–3612. [Google Scholar] [CrossRef]
  12. Valencia, D.; Jia, J.; Li, R.; Hayashi, A.; Lecchi, M.; Terezakis, R.; Gee, T.; Liarokapis, M.V.; MacDonald, B.A.; Williams, H. Comparison of model-based and model-free reinforcement learning for real-world dexterous robotic manipulation tasks. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 871–878. [Google Scholar]
  13. Wang, Q.; Sanchez, F.R.; McCarthy, R.; Bulens, D.C.; McGuinness, K.; O’Connor, N.; Wüthrich, M.; Widmaier, F.; Bauer, S.; Redmond, S.J. Dexterous robotic manipulation using deep reinforcement learning and knowledge transfer for complex sparse reward-based tasks. Expert Syst. 2023, 40, e13205. [Google Scholar] [CrossRef]
  14. Duan, H.; Wang, P.; Huang, Y.; Xu, G.; Wei, W.; Shen, X. Robotics dexterous grasping: The methods based on point cloud and deep learning. Front. Neurorobotics 2021, 15, 658280. [Google Scholar] [CrossRef]
  15. Jin, B.; Ye, S.; Su, J.; Luo, J. Unknown payload adaptive control for quadruped locomotion with proprioceptive linear legs. IEEE/ASME Trans. Mechatronics 2022, 27, 1891–1899. [Google Scholar] [CrossRef]
  16. Luo, J.; Ye, S.; Su, J.; Jin, B. Prismatic quasi-direct-drives for dynamic quadruped locomotion with high payload capacity. Int. J. Mech. Sci. 2022, 235, 107698. [Google Scholar] [CrossRef]
  17. Peng, J.; Zhou, W.; Han, Y.; Li, M.; Liu, W. Deep learning-based autonomous real-time digital meter reading recognition method for natural scenes. Measurement 2023, 222, 113615. [Google Scholar] [CrossRef]
  18. Luo, J.; Gong, Z.; Su, Y.; Ruan, L.; Zhao, Y.; Asada, H.H.; Fu, C. Modeling and balance control of supernumerary robotic limb for overhead tasks. IEEE Robot. Autom. Lett. 2021, 6, 4125–4132. [Google Scholar] [CrossRef]
  19. Luo, J.; Zhao, Y.; Ruan, L.; Mao, S.; Fu, C. Estimation of CoM and CoP trajectories during human walking based on a wearable visual odometry device. IEEE Trans. Autom. Sci. Eng. 2020, 19, 396–409. [Google Scholar] [CrossRef]
  20. Peng, J.; Zhang, C.; Kang, L.; Feng, G. Endoscope FOV autonomous tracking method for robot-assisted surgery considering pose control, hand–eye coordination, and image definition. IEEE Trans. Instrum. Meas. 2022, 71, 1–16. [Google Scholar] [CrossRef]
  21. Cheng, X.; Patil, S.; Temel, Z.; Kroemer, O.; Mason, M.T. Enhancing dexterity in robotic manipulation via hierarchical contact exploration. IEEE Robot. Autom. Lett. 2023, 9, 390–397. [Google Scholar] [CrossRef]
  22. Ma, X.; Zhang, J.; Wang, B.; Huang, J.; Bao, G. Continuous adaptive gaits manipulation for three-fingered robotic hands via bioinspired fingertip contact events. Biomim. Intell. Robot. 2024, 4, 100144. [Google Scholar] [CrossRef]
  23. Ma, J.; Tian, Q.; Liu, K.; Liu, J.; Guo, S. Fully tactile dexterous hand grasping strategy combining visual and tactile senses. In Proceedings of the International Conference on Intelligent Robotics and Applications, Hangzhou, China, 5–7 July 2023; pp. 133–143. [Google Scholar]
  24. Li, S.L.; Zhang, A.; Chen, B.; Matusik, H.; Liu, C.; Rus, D.; Sitzmann, V. Controlling diverse robots by inferring Jacobian fields with deep networks. Nature 2025, 643, 89–95. [Google Scholar] [CrossRef]
  25. Altiner, B.; Saoud, A.; Caldas, A.; Makarov, M. Scenario convex programs for dexterous manipulation under modeling uncertainties. In Proceedings of the IEEE International Conference on Automation Science and Engineering, Bari, Italy, 28 August–1 September 2024; pp. 3490–3497. [Google Scholar]
  26. Kim, S.; Ahn, T.; Lee, Y.; Kim, J.; Wang, M.Y.; Park, F.C. DSQNet: A deformable model-based supervised learning algorithm for grasping unknown occluded objects. IEEE Trans. Autom. Sci. Eng. 2022, 20, 1721–1734. [Google Scholar] [CrossRef]
  27. Zhang, Y.; Jiang, D.; Wang, J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans. Neural Netw. 2002, 13, 1053–1063. [Google Scholar] [CrossRef]
  28. Sathya, A.S.; Carpentier, J. Constrained articulated body dynamics algorithms. IEEE Trans. Robot. 2025, 41, 430–449. [Google Scholar] [CrossRef]
  29. Qiao, Y.L.; Liang, J.; Koltun, V.; Lin, M.C. Efficient differentiable simulation of articulated bodies. In Proceedings of the 38th International Conference on Machine Learning (ICML), Virtual, 18–24 July 2021; pp. 8661–8671. [Google Scholar]
  30. Zhou, X.; Liao, B. Advances in Zeroing Neural Networks: Convergence Optimization and Robustness in Dynamic Systems. Mathematics 2025, 13, 1801. [Google Scholar] [CrossRef]
  31. Cui, Y.; Liu, K. Adaptive Fuzzy Fault-Tolerant Formation Control of High-Order Fully Actuated Multi-Agent Systems with Time-Varying Delays. Mathematics 2025, 13, 2813. [Google Scholar] [CrossRef]
  32. Stanimirović, P.S.; Ćirić, M.; Mourtas, S.D.; Brzaković, P.; Karabašević, D. Simulations and bisimulations between weighted finite automata based on time-varying models over real numbers. Mathematics 2024, 12, 2110. [Google Scholar] [CrossRef]
  33. Xiao, L.; Jia, L.; Wang, Y.; Dai, J.; Liao, Q.; Zhu, Q. Performance analysis and applications of finite-time ZNN models with constant/fuzzy parameters for TVQPEI. IEEE Trans. Neural Networks Learn. Syst. 2021, 33, 6665–6676. [Google Scholar] [CrossRef] [PubMed]
  34. Hu, Z.; Xiao, L.; Dai, J.; Xu, Y.; Zuo, Q.; Liu, C. A unified predefined-time convergent and robust ZNN model for constrained quadratic programming. IEEE Trans. Ind. Inform. 2020, 17, 1998–2010. [Google Scholar] [CrossRef]
  35. Li, C.; Zhang, H.; Chen, K.; Yang, M. Novel Noise-Tolerant Zeroing Neurodynamics Algorithms for Dynamic Nonlinear Least Square Problems with Robot Application. IEEE Trans. Ind. Electron. 2025. early access. [Google Scholar] [CrossRef]
  36. Liu, Y.; Liu, K.; Wang, G.; Sun, Z.; Jin, L. Noise-tolerant zeroing neurodynamic algorithm for upper limb motion intention-based human–robot interaction control in non-ideal conditions. Expert Syst. Appl. 2023, 213, 118891. [Google Scholar] [CrossRef]
  37. Qiu, B.; Li, X.D.; Yang, S. A novel discrete-time neurodynamic algorithm for future constrained quadratic programming with wheeled mobile robot control. Neural Comput. Appl. 2023, 35, 2795–2809. [Google Scholar] [CrossRef]
  38. Abbassi, R.; Jerbi, H.; Kchaou, M.; Simos, T.E.; Mourtas, S.D.; Katsikis, V.N. Towards higher-order zeroing neural networks for calculating quaternion matrix inverse with application to robotic motion tracking. Mathematics 2023, 11, 2756. [Google Scholar] [CrossRef]
  39. Jin, J.; Zhu, J.; Gong, J.; Chen, W. Novel activation functions-based ZNN models for fixed-time solving dynamic Sylvester equation. Neural Comput. Appl. 2022, 34, 14297–14315. [Google Scholar] [CrossRef]
  40. Chen, J.; Kang, X.; Zhang, Y. Continuous and discrete ZND models with aid of eleven instants for complex QR decomposition of time-varying matrices. Mathematics 2023, 11, 3354. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Li, Z.; Yang, M.; Ming, L.; Guo, J. Jerk-level Zhang neurodynamics equivalency of bound constraints, equation constraints, and objective indices for cyclic motion of robot-arm systems. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 3005–3018. [Google Scholar] [CrossRef] [PubMed]
  42. Wu, Y.; Zhu, Z.; Liu, F.; Chrysos, G.; Cevher, V. Extrapolation and spectral bias of neural nets with hadamard product: A polynomial net study. Adv. Neural Inf. Process. Syst. 2022, 35, 26980–26993. [Google Scholar]
  43. Zhang, Y.; Ge, S.S. Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans. Neural Netw. 2005, 16, 1477–1490. [Google Scholar] [CrossRef]
  44. Bhat, S.P.; Bernstein, D.S. Finite-time stability of continuous autonomous systems. SIAM J. Control Optim. 2000, 38, 751–766. [Google Scholar] [CrossRef]
  45. Guo, D.; Nie, Z.; Yan, L. Theoretical analysis, numerical verification and geometrical representation of new three-step DTZD algorithm for time-varying nonlinear equations solving. Neurocomputing 2016, 214, 516–526. [Google Scholar] [CrossRef]
  46. Yu, F.; Liu, L.; Xiao, L.; Li, K.; Cai, S. A robust and fixed-time zeroing neural dynamics for computing time-variant nonlinear equation using a novel nonlinear activation function. Neurocomputing 2019, 350, 108–116. [Google Scholar] [CrossRef]
  47. Biswas, B.N.; Chatterjee, S.; Mukherjee, S.P. A Discussion on Euler Method: A Review. Electron. J. Math. Anal. Appl. 2013, 1, 209–219. [Google Scholar] [CrossRef]
  48. Yang, M.; Yu, P.; Tan, N. Discrete integral-type zeroing neurodynamics for robust inverse-free and model-free motion control of redundant manipulators. Comput. Electr. Eng. 2024, 118, 109344. [Google Scholar] [CrossRef]
  49. Tan, Z.; Zhang, Y. Finite-time convergent gradient-zeroing neurodynamic system for solving temporally-variant linear simultaneous equation. Appl. Soft Comput. 2025, 170, 112695. [Google Scholar] [CrossRef]
Figure 1. Framework of DFTCN algorithm for trajectory tracking of robotic hand. (a) Formulation of the TVQP problem for the robotic hand. (b) Design of the control law for the DFTCN algorithm. (c) Discretization of the DFTCN algorithm using the three-step Taylor-type method. (d) Process from object point cloud acquisition to grasping task execution in real-robot implementation.
Figure 2. Comparison between nonsmooth max/min function and smooth LSE function for optimization constraints. Smooth continuously differentiable LSE surrogate for original nonsmooth max/min operator. Lower-right inset magnified view of fine-scale behavior near the origin within x [ 0.4 , 0.4 ] .
Figure 3. Configuration of robotic hand in global coordinate frame. Fingers including thumb, index finger, middle finger, ring finger, and little finger. Red dots at fingertips as end-effectors. Thumb joints with pitch and roll motions, other fingers with roll motion only. Total of 15 DoFs for precise grasping.
Figure 4. Performance evaluation of DFTCN and ETDN algorithms in robotic hand’s trajectory tracking at τ = 0.01 s . Subfigures (ae): index finger, middle finger, ring finger, little finger, and thumb. 3D trajectory comparison with detail view, position error ε convergence curve, and tracking error norm | | e k + 1 | | 2 curve in each subfigure.
Figure 5. Performance evaluation of DFTCN and ETDN algorithms in robotic hand’s trajectory tracking at τ = 0.02 s . Subfigures (ae): index finger, middle finger, ring finger, little finger, and thumb. 3D trajectory comparison with detail view, position error ε convergence curve, and tracking error norm | | e k + 1 | | 2 curve in each subfigure.
Figure 6. Robustness of DFTCN under camera noise at different sampling gaps τ . (a) Results for τ = 0.01 s , rapid rise and gradual decline of both curves, increased tracking error, and slower convergence speed due to noise. (b) Results for τ = 0.02 s , higher peak average tracking error with noise, higher overall error level than τ = 0.01 s , reduced robustness to noise at larger τ .
Figure 7. Setup of real-robot experiment. (a) Hardware setup of robotic grasping experiment, blue background, top camera, robotic arm with hand, black operation table with containers, and target object. (b) Grasping visualization based on object point cloud, red grasping posture for desired trajectories, and grasping pose generation process.
Figure 8. Comparison of grasping performance of robotic hand using DFTCN and ETDN algorithms at different time points. (a) DFTCN algorithm, initialization at t = 0 s , grasping action at t = 4 s , successful task completion at t = 7 s . (b) ETDN algorithm, initialization at t = 0 s , grasping action at t = 4 s , task failure at t = 7 s .
Figure 9. Six-stage process of grasping task under noisy conditions. (a) At t = 0 s , initial approach of manipulator to object. (b) Object repositioning to new location at t = 2 s . (c) During t = 5 s , achievement of pre-grasp pose. (d) Object displacement and grasp-point offset at (t = 7 s). (e) Detection of offset and rapid trajectory replanning by DFTCN algorithm at t = 9 s . (f) Successful completion of grasping task at t = 12 s .
Table 1. Simulation Configuration and Parameters.
| Algorithm | DoFs | Simulation Hardware Specifications | Software Platform | Sampling Gap (s) | Time (s) |
|---|---|---|---|---|---|
| DFTCN | 15 | 1. Intel Core i7-12700KF; 2. NVIDIA GeForce RTX 4070 SUPER. | MATLAB R2023a | 0.01, 0.02 | 5 |
| ETDN | 15 | Same as DFTCN | MATLAB R2023a | 0.01, 0.02 | 5 |
Table 2. Desired Trajectories for Grasping by End-effector with Coordinates P_x(t), P_y(t), and P_z(t) Denoting Time-dependent Positions along the x, y, and z Axes.

| Finger | P_x(t) (10⁻³ m) | P_y(t) (10⁻³ m) | P_z(t) (10⁻³ m) |
|---|---|---|---|
| Index finger | 0.424t² − 2.46t + 42.4 | 58.0 | 0.52t² + 6.98t + 10.0 |
| Middle finger | 0.64t² − 4.12t + 54.3 | 40.0 | 0.904t² + 10.58t + 15.5 |
| Ring finger | 0.496t² − 4.92t + 44.8 | 22.0 | 0.888t² + 9.18t + 20.9 |
| Little finger | 0.04t² − 5.54t + 24.3 | 4.0 | 0.96t² + 9.18t + 13.8 |
| Thumb | 0.072t² + 1.36t − 41.8 | 62.0 | 0.04t² − 0.86t + 57.5 |
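The desired trajectories in Table 2 are simple quadratic polynomials in time, so each fingertip reference can be evaluated in closed form at every sampling instant. The following is a minimal sketch for the index finger, assuming the coefficient signs reconstructed above (the minus signs were lost in extraction and are an editorial assumption, not the authors' verified values); coefficients are in units of 10⁻³ m.

```python
def index_finger_trajectory(t):
    """Desired index-finger position (Px, Py, Pz) in millimeters at time t (s).

    Coefficients follow Table 2; the signs of the linear terms are
    reconstructed and should be checked against the original article.
    """
    px = 0.424 * t**2 - 2.46 * t + 42.4
    py = 58.0  # the y-coordinate is held constant
    pz = 0.52 * t**2 + 6.98 * t + 10.0
    return px, py, pz

if __name__ == "__main__":
    # Sample the 5 s horizon of Table 1 at the 0.01 s sampling gap.
    tau = 0.01
    samples = [index_finger_trajectory(k * tau) for k in range(501)]
    print(samples[0])   # position at t = 0 s: (42.4, 58.0, 10.0)
```

Evaluating such a reference at every sampling gap τ is what the discretized algorithm tracks; the same pattern applies to the other four fingers with their respective coefficients.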
Table 3. Performance Comparison of Algorithms under τ = 0.01 s.

| Algorithm | Convergence Time (s) | Convergence Rate | RMSE (m) | MAE (m) | Per-Iteration Time (s) |
|---|---|---|---|---|---|
| DFTCN | 0.740 | 0.0177 | 1.399 × 10⁻⁴ | 6.892 × 10⁻⁶ | 1.108 × 10⁻³ |
| ETDN | 3.220 | 0.0061 | 2.940 × 10⁻⁴ | 2.120 × 10⁻⁴ | 1.304 × 10⁻³ |
| GAGZNS | 2.048 | 0.0147 | 5.360 × 10⁻⁴ | 4.520 × 10⁻⁴ | 8.420 × 10⁻⁴ |
| DZN | 2.410 | 0.0095 | 9.020 × 10⁻⁴ | 4.070 × 10⁻⁴ | 4.300 × 10⁻⁴ |
Table 4. Performance Comparison of Algorithms under τ = 0.02 s.

| Algorithm | Convergence Time (s) | Convergence Rate | RMSE (m) | MAE (m) | Per-Iteration Time (s) |
|---|---|---|---|---|---|
| DFTCN | 1.290 | 0.0353 | 2.650 × 10⁻⁴ | 1.360 × 10⁻⁴ | 1.300 × 10⁻³ |
| ETDN | 4.320 | 0.0133 | 5.580 × 10⁻⁴ | 4.170 × 10⁻⁴ | 1.380 × 10⁻³ |
| GAGZNS | 2.050 | 0.0147 | 5.360 × 10⁻⁴ | 4.520 × 10⁻⁴ | 9.100 × 10⁻⁴ |
| DZN | 1.630 | 0.0086 | 6.290 × 10⁻⁴ | 4.170 × 10⁻⁴ | 4.600 × 10⁻⁴ |
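Tables 3 and 4 compare the algorithms by convergence time, RMSE, MAE, and per-iteration time. As a minimal sketch of how such metrics are obtained from a tracking-residual sequence (the exact error definitions and convergence tolerance below are editorial assumptions, not taken from the tables):

```python
import math

def rmse(errors):
    """Root-mean-square error of a residual sequence."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def mae(errors):
    """Mean absolute error of a residual sequence."""
    return sum(abs(e) for e in errors) / len(errors)

def convergence_time(errors, tau, tol=1e-3):
    """First instant after which the residual stays below tol.

    tau is the sampling gap (s); returns None if the sequence
    never settles below the tolerance.
    """
    for k in range(len(errors)):
        if all(abs(e) < tol for e in errors[k:]):
            return k * tau
    return None

if __name__ == "__main__":
    # Hypothetical residual sequence sampled at tau = 0.01 s.
    residuals = [0.1, 0.01, 5e-4, 2e-4, 1e-4, 9e-5]
    print(rmse(residuals), mae(residuals), convergence_time(residuals, 0.01))
```

Under this reading, DFTCN's smaller RMSE and MAE in both tables indicate tighter steady-state tracking, consistent with the O(τ³) residual bound stated in the abstract: halving τ from 0.02 s to 0.01 s reduces DFTCN's RMSE by roughly a factor of two or more.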
Table 5. Experimental Setup and Specifications.

| Algorithm | DoFs | Hardware Specifications | Sampling Gap (s) | Time (s) |
|---|---|---|---|---|
| DFTCN | 12 | 1. Han's E03 robot; 2. Inspire hand; 3. Intel Core i7-12700KF; 4. NVIDIA GeForce RTX 4070 SUPER; 5. Intel RealSense D435if | 0.02 | 5 |
| ETDN | 12 | Same as DFTCN | 0.01 | 5 |

Chen, H.; Xin, Y.; Li, H.; Han, Y.; Zhang, Y.; Luo, J. Discrete Finite-Time Convergent Neurodynamics Approach for Precise Grasping of Multi-Finger Robotic Hand. Mathematics 2025, 13, 3823. https://doi.org/10.3390/math13233823