Abstract
The multi-fingered robotic hand exhibits significant potential in grasping tasks owing to its high degrees of freedom (DoFs). Grasping an object, however, forms a closed-chain kinematic system between the hand and the object, which increases the dimensionality of the trajectory-tracking problem and substantially raises the computational complexity of traditional methods. This study therefore proposes the discrete finite-time convergent neurodynamics (DFTCN) algorithm to address this issue. Specifically, a time-varying quadratic programming (TVQP) problem is formulated for each finger, with joint-angle and angular-velocity constraints incorporated through log-sum-exp (LSE) functions. The TVQP problem is then transformed into a time-varying equation system (TVES) using the Karush–Kuhn–Tucker (KKT) conditions. A novel control law is designed, employing a three-step Taylor-type discretization for efficient implementation. Theoretical analysis verifies the algorithm's stability and finite-time convergence, with the maximum steady-state residual error being of order $O(\tau^{3})$, where $\tau$ denotes the sampling gap. Numerical simulations illustrate the favorable convergence and high accuracy of the DFTCN algorithm compared with three existing dominant neurodynamic algorithms, and real-robot experiments further confirm its capability for precise grasping, even in the presence of camera noise and external disturbances.
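As a brief illustrative sketch of the LSE constraint handling named above (the symbols $g_i$, $\theta$, and $\rho$ are assumed notation for this sketch, not taken from the paper): the log-sum-exp function

$$\mathrm{LSE}_{\rho}(g_1,\dots,g_m) \;=\; \frac{1}{\rho}\ln\sum_{i=1}^{m} e^{\rho g_i}$$

is a smooth upper bound on the maximum, satisfying $\max_i g_i \le \mathrm{LSE}_{\rho}(g_1,\dots,g_m) \le \max_i g_i + \tfrac{\ln m}{\rho}$. Writing the joint-angle and angular-velocity limits as inequalities $g_i(\theta,\dot{\theta}) \le 0$, the single smooth constraint $\mathrm{LSE}_{\rho} \le 0$ conservatively enforces all of them at once while remaining differentiable, which is what keeps the subsequent KKT-based transformation to a TVES tractable.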
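Likewise, a minimal sketch of the three-step Taylor-type discretization, assuming the standard Taylor-type (Taylor–Zhang) difference formula with sampling gap $\tau$ (the paper's exact coefficients may differ):

$$\dot{\mathbf{x}}_k \;\approx\; \frac{2\mathbf{x}_{k+1} - 3\mathbf{x}_k + 2\mathbf{x}_{k-1} - \mathbf{x}_{k-2}}{2\tau},$$

whose truncation error is $O(\tau^{2})$, as a Taylor expansion of the four instants about $t_k$ confirms. Substituting such a formula into a continuous-time neurodynamic control law yields a discrete-time model whose steady-state residual error is of order $O(\tau^{3})$, consistent with the error order stated in the abstract.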