Article

Human–Robot Interaction for a Manipulator Based on a Neural Adaptive RISE Controller Using Admittance Model

1
Jiangsu Product Quality Testing and Inspection Institute, Nanjing 210007, China
2
School of Information Engineering, Southwest University of Science and Technology, Mianyang 621000, China
3
Tianfu College, Southwest University of Finance and Economics, Mianyang 621000, China
4
School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
*
Authors to whom correspondence should be addressed.
Electronics 2025, 14(24), 4862; https://doi.org/10.3390/electronics14244862
Submission received: 4 November 2025 / Revised: 21 November 2025 / Accepted: 4 December 2025 / Published: 10 December 2025
(This article belongs to the Section Industrial Electronics)

Abstract

Human–robot cooperative tasks require physical human–robot interaction (pHRI) systems that can adapt to individual human behaviors while ensuring robustness and stability. This paper presents a dual-loop control framework combining an admittance outer loop and a neural adaptive inner loop based on the Robust Integral of the Sign of the Error (RISE) approach. The outer loop reshapes the manipulator trajectory according to interaction forces, ensuring compliant motion and user safety. The inner-loop Adaptive RISE–RBFNN controller compensates for unknown nonlinear dynamics and bounded disturbances through online neural learning and robust sign-based correction, guaranteeing semi-global asymptotic convergence. Quantitative results demonstrate that the proposed adaptive RISE controller with neural-network error compensation (ARINNSE) achieves superior performance in the Joint-1 tracking task, reducing the root-mean-square tracking error by approximately 51.7% and 42.3% compared to conventional sliding mode control and standard RISE methods, respectively, while attaining the smallest maximum absolute error and maintaining control energy consumption comparable to that of RISE. Under human–robot interaction scenarios, the controller preserves stable, bounded control inputs and rapid error convergence even under time-varying disturbances. These results confirm that the proposed admittance-based RISE–RBFNN framework provides enhanced robustness, adaptability, and compliance, making it a promising approach for safe and efficient human–robot collaboration.

1. Introduction

The increasing demand for service robots in home and industrial workspaces has spurred the development of highly capable robots equipped with advanced proprioception sensing and actuation control. These systems, ranging from robotic manipulators to full humanoids, are designed to assist humans in various tasks, many of which require collaborative efforts to ensure safe, efficient, and effective execution. Over the past two decades, the integration of robotic systems into collaborative scenarios has been the focus of extensive and rapidly growing research efforts [1,2,3,4].
Among the many human–robot collaborative tasks, one of the most important and challenging issues is physical human–robot interaction (pHRI). In pHRI, the robot physically interacts with a human partner and must interpret and respond to uncertain, time-varying forces and motions [5]. Such uncertainty arises from unpredictable human intentions, complex environmental contacts, and unmodeled robot dynamics. Consequently, ensuring compliant, stable, and safe interaction under these uncertain conditions becomes a core research problem in pHRI [6].
In a typical pHRI setup, humans and robots collaborate within a shared workspace to achieve a common task. The robot may serve as a rehabilitation assistant or industrial manipulator, and the human operator guides its end-effector through interactive forces. Because the coupled dynamics of the human, robot, and environment are uncertain, mismatched motion planning or excessive contact forces can lead to safety issues and discomfort. Therefore, a key challenge is to control the manipulator such that it obeys the operator’s intention within the task space, despite uncertain and dynamic interaction conditions [7,8].
To ensure compliant and safe cooperation under such uncertainties, force feedback has been widely applied in pHRI. It enables haptic perception and allows the robot to maintain prescribed contact forces autonomously [9,10]. However, ensuring that no overshoot or oscillation occurs when interacting with time-varying environments remains a challenge, especially when the environment model and human intention are unknown. Therefore, the fundamental problem in compliant force control is to design a control law that guarantees stability and responsiveness despite model uncertainties and external disturbances.
To handle these challenges, numerous compliant control strategies have been proposed. Hybrid position/force control [11,12] and impedance control [13] represent foundational frameworks. Impedance control defines the dynamic relationship between interaction force and position, eliminating the need for switching between contact and non-contact states, thus enhancing robustness. Admittance control, the dual of impedance, modifies reference motion according to measured interaction forces. These methods fundamentally aim to manage interaction uncertainty by shaping the robot’s dynamic response to unknown external forces, which is why impedance/admittance schemes have become mainstream in pHRI [14,15,16,17].
In generalized admittance control systems, a dual-loop structure is often used: the outer loop generates a reference trajectory according to the admittance model, while the inner loop ensures trajectory tracking. This structure decomposes the compliance problem into trajectory generation and motion compensation [18]. In essence, motion compensation control for manipulators is a practical approach to counteract the uncertainties introduced by human interaction and environmental variations. However, since accurate dynamic models are rarely available in practice, model uncertainties and external disturbances remain major obstacles. Adaptive control [19,20,21] and intelligent learning methods have thus been introduced to compensate for these uncertainties online.
Neural-network (NN) techniques have become powerful tools for approximating nonlinear and uncertain dynamics [1,22,23,24,25]. For instance, refs. [26,27] applied adaptive NNs to flexible robots to approximate uncertain dynamics. Additionally, ref. [28] introduced adaptive fuzzy control with a modulation function. However, classical NN-based control methods typically guarantee only uniform ultimate boundedness due to reconstruction errors and external disturbances [29]. To enhance asymptotic performance, RBFNNs with robust terms have been developed [30,31], and RISE-based controllers have further improved smoothness and reduced chattering [32,33,34].
Despite these advances, most NN-based methods use desired trajectories as NN inputs and require large control gains to envelop uncertainty, which can amplify chattering and degrade comfort [35]. Moreover, traditional approaches often assume known bounds of disturbances, which are unrealistic in human-interaction scenarios. To address this, ref. [36] proposed an adaptive NN scheme enabling asymptotic tracking without prior knowledge of bounds. However, achieving semi-global asymptotic stability while maintaining continuous and bounded control signals under uncertain and time-varying pHRI conditions remains an open challenge.
Therefore, a clear research gap exists in developing a control framework that can simultaneously ensure continuity, boundedness, and asymptotic stability of the closed-loop system in the presence of unknown and dynamically changing interaction dynamics. Building on these insights, this paper proposes an admittance-based controller integrating the RISE (Robust Integral of the Sign of the Error) and RBFNN learning mechanisms to directly tackle interaction uncertainty in pHRI. The proposed approach enhances interaction compliance, compensates dynamic uncertainties, and guarantees semi-global asymptotic tracking without requiring prior knowledge of disturbance bounds.
It is also important to emphasize that the present work focuses on a simulation-based evaluation on a standard two-link manipulator model. Human–robot interaction is emulated through mathematically defined disturbance and force models rather than sensor-based measurements, and the cooperative task is restricted to a single reference-tracking scenario in a fixed environment. Consequently, the results presented here should be viewed as an initial validation of the proposed controller under controlled and idealized conditions rather than an exhaustive experimental study across different tasks, operators, or hardware setups.
The main contributions are summarized as follows:
  • A dual-loop admittance-based control framework integrating RISE and RBFNN is developed to achieve robust and compliant interaction under uncertain dynamics.
  • The proposed scheme achieves semi-global asymptotic tracking with bounded, continuous control inputs, without increasing control gain or requiring known disturbance bounds.

2. Preliminaries and Problem Formulation

2.1. Problem Formulation

In a typical physical human–robot interaction (pHRI) scenario, as illustrated in Figure 1, the human contacts the manipulator at the end-effector and exerts an interaction force F_ext (red arrow in the left panel), to which the robot responds with joint actuation torques τ_act to realize the human-intended motion. The task is carried out within a bounded workspace Ω (right panel) that defines the admissible region for safe operation; leaving Ω may signal a potential collision with the environment. We therefore aim to design a controller that renders the manipulator compliant to human motion, keeps interaction forces below a comfort threshold, and ensures safety by enforcing the workspace constraint and preventing collisions outside Ω. Accordingly, the objective of this study is to design a controller for the manipulator that achieves the following:
  • Design a controller for the robot that enables smooth adaptation to human interactions, facilitating trajectory reshaping using admittance control. This ensures that the robot responds compliantly to human movements, thereby enhancing safety and user comfort during physical interaction.
  • Develop a controller capable of robustly tracking trajectories generated by admittance control, even in the presence of model uncertainties. This includes uncertainties in the robot’s dynamics and the environment, ensuring reliable performance in real-world applications of pHRI.
These objectives aim to enhance the overall effectiveness and safety of human–robot collaboration by prioritizing interaction compliance and robust trajectory tracking. As shown in Figure 2, the impedance model receives the desired Cartesian pose X_d and the measured pose from the kinematic block, together with the force error ΔF = F_d − F obtained by comparing the desired contact force F_d with the force F measured by the sensor. From these inputs it generates a compliant Cartesian reference X_r. The inverse kinematic block then converts X_r into joint-space references (q_r, q̇_r, q̈_r). In the inner loop, the neural adaptive RISE controller receives the actual joint states (q, q̇), the references (q_r, q̇_r, q̈_r), and the estimated joint-space contact torque JᵀF (wrench-to-joint transformation) to produce the control input τ applied to the robot manipulator. This architecture preserves the decoupling between the outer impedance loop and the inner position loop, achieving force-compliant interaction while maintaining robust trajectory tracking.

2.2. Physical HRI Objective

The HRI should consider the compliant force interaction between the manipulator and the human. The manipulator is controlled by the designed controller, which will be described later.
The target impedance model can be described as
$M_d(\ddot{q}_d - \ddot{q}_r) + C_d(\dot{q}_d - \dot{q}_r) + G_d(q_d - q_r) = \tau_f$ (1)
where q_d is the desired trajectory, q_r is the reference trajectory of the position-control loop, and M_d, C_d, and G_d are the desired inertia, damping, and stiffness matrices, i.e., the desired impedance model parameters.
The proposed control scheme consists of two steps. In the first step, the desired HRI performance is ensured with respect to the interaction environment dynamics: according to the admittance model, the interaction forces at the end-effector of the manipulator are used to generate the manipulator motion trajectory from (1). In the second step, the position trajectory tracking is addressed by the designed adaptive controller so that q → q_r as t → ∞. Therefore, the impedance model is
$M_d(\ddot{q}_d - \ddot{q}) + C_d(\dot{q}_d - \dot{q}) + G_d(q_d - q) = \tau_f$ (2)
Here, the reference trajectory q r is generated from (1), while the outer impedance loop does not interfere with the inner position-control loop. The proposed impedance design and trajectory-tracking controller will be detailed in the following sections. As shown in Figure 3, the manipulator (robot joint and links) is coupled to the environment through an interaction impedance at the end-effector, modeled as a virtual mass–spring–damper that shapes compliant motion. At the contact surface, this interaction impedance is in series with the environment impedance representing the human/ambient dynamics (e.g., tissue or tool stiffness and damping); the resulting contact force f ( x ) depends on the relative displacement/deformation x at the interface.
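For implementation, the admittance relation above can be integrated numerically to generate the reference trajectory from the measured interaction torque. The following minimal NumPy sketch (an explicit-Euler discretization; the matrices, step size, and function name are illustrative assumptions, not values from the paper) solves the model for the reference acceleration and integrates it:

```python
import numpy as np

def admittance_step(q_r, dq_r, q_d, dq_d, ddq_d, tau_f, Md, Cd, Gd, dt):
    """One explicit-Euler step of the admittance model (1):
    Md (ddq_d - ddq_r) + Cd (dq_d - dq_r) + Gd (q_d - q_r) = tau_f.
    Solves for the reference acceleration ddq_r, then integrates it to
    update the reference trajectory (q_r, dq_r) fed to the inner loop."""
    ddq_r = ddq_d - np.linalg.solve(
        Md, tau_f - Cd @ (dq_d - dq_r) - Gd @ (q_d - q_r))
    dq_r = dq_r + ddq_r * dt
    q_r = q_r + dq_r * dt
    return q_r, dq_r
```

With zero interaction torque and the reference initialized on the desired trajectory, the reference simply follows q_d; a nonzero τ_f compliantly reshapes it, which is the outer-loop behavior described above.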

2.3. Dynamics Modeling of Manipulator System

In the joint space, the dynamics of an m-link manipulator system can be represented by the equation:
$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + f_d = \tau - \tau_f$ (3)
Here, M(q) ∈ ℝ^{m×m} represents the inertia matrix, C(q, q̇)q̇ ∈ ℝ^m represents the Coriolis and centripetal torques, G(q) ∈ ℝ^m represents the gravitational torques, f_d ∈ ℝ^m represents external disturbance forces, q ∈ ℝ^m is the joint position vector, q̇ ∈ ℝ^m is the joint velocity vector, q̈ ∈ ℝ^m is the joint acceleration vector, τ_f ∈ ℝ^m denotes interactive torques from the human or contact environment, and τ ∈ ℝ^m represents the control input to the manipulator system.
The forward kinematics function Ψ(q) relates the joint angles q to the position of the end-effector x; thus, the manipulator's forward kinematics can be expressed as x = Ψ(q). Differentiating with respect to time yields ẋ = Γ(q)q̇, where Γ(q) ∈ ℝ^{n×m} is the Jacobian matrix of the manipulator. Using inverse kinematics, we can compute q̇ and q̈ as follows [37]:
$\dot{q} = \Gamma^{+}(q)\,\dot{x}, \qquad \ddot{q} = \dot{\Gamma}^{+}(q)\,\dot{x} + \Gamma^{+}(q)\,\ddot{x}$ (4)
The vector x = [x₁, x₂, …, x_n]ᵀ represents the position of the end-effector in the operational space of the manipulator, where n denotes the dimensionality of the end-effector coordinates. This research focuses on manipulators characterized by a well-defined forward kinematic function Ψ(q) and Jacobian matrix Γ(q). The pseudoinverse of Γ(q) is denoted by Γ⁺(q). The vectors x, ẋ, ẍ ∈ ℝⁿ correspond to the position, velocity, and acceleration in the task space, respectively.
Assumption 1.
The manipulator operates within a compact subset Q_s ⊂ ℝ² of the joint space that does not contain kinematic singularities. For all q ∈ Q_s, the Jacobian matrix Γ(q) has constant full row rank, and its Moore–Penrose pseudoinverse Γ⁺(q) exists, is unique, and is continuously differentiable. Moreover, Γ⁺(q) and Γ̇⁺(q) are bounded on Q_s.
Remark 1.
This assumption is standard in operational-space control and avoids the technical difficulties associated with kinematic singularities and near-singular configurations. Since the present work focuses on the fundamental stability properties of the proposed ARINNSE scheme for a two-DOF manipulator in a regular workspace, singularity-robust inverse kinematics and pseudoinverse regularization are not considered and are left for future research.
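Under Assumption 1, the differential-kinematics relations above can be evaluated directly with the Moore–Penrose pseudoinverse. The sketch below uses a planar two-link arm with illustrative link lengths l1, l2 (assumptions of this sketch, not parameters from the paper) to compute joint velocities from a commanded task-space velocity:

```python
import numpy as np

# Two-link planar arm: forward kinematics and Jacobian.
# Link lengths are illustrative values, not taken from the paper.
l1, l2 = 1.0, 0.8

def fk(q):
    """End-effector position x = Psi(q)."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Gamma(q) = dPsi/dq for the planar two-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def joint_velocity(q, x_dot):
    """Inverse differential kinematics, first relation of (4):
    q_dot = Gamma^+(q) x_dot, via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(jacobian(q)) @ x_dot
```

Away from singular configurations (q₂ ≠ 0 here) the square Jacobian is invertible and the pseudoinverse coincides with the ordinary inverse, so Γ(q)q̇ reproduces the commanded ẋ exactly.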
By substituting (4) into (3), we get
$M_e(q)\ddot{x} + C_e(q,\dot{q})\dot{x} + G_e(q) + P_d = P - P_e$ (5)
where M_e(q) ∈ ℝ^{n×n} is the inertia matrix, C_e(q, q̇) ∈ ℝ^{n×n} is the Coriolis and centripetal matrix, G_e(q) ∈ ℝⁿ is the gravitational vector, and P_d ∈ ℝⁿ is the external disturbance vector acting within the task environment. P_e ∈ ℝⁿ signifies the external interaction force, and P ∈ ℝⁿ represents the manipulator control input expressed in the task space. The matrices and vectors are computed as follows:
$M_e(q) = \Gamma^{+T}(q) M(q) \Gamma^{+}(q), \quad C_e(q,\dot{q}) = \Gamma^{+T}(q)\big[C(q,\dot{q}) - M(q)\Gamma^{+}(q)\dot{\Gamma}(q)\big]\Gamma^{+}(q), \quad G_e(q) = \Gamma^{+T}(q) G(q), \quad P = \Gamma^{+T}(q)\,\tau, \quad P_d = \Gamma^{+T}(q)\, f_d, \quad P_e = \Gamma^{+T}(q)\,\tau_f$ (6)

2.4. Neural Network Approximation

Due to its straightforward architecture, robust nonlinear approximation capabilities, and strong generalization abilities, the RBFNN has attracted considerable interest from researchers in diverse fields and finds extensive application in function approximation. In general, an RBFNN can approximate any continuous function F ( δ ) as follows:
$F(\delta) = W^{T}\Psi(\delta) + \varrho$ (7)
where δ is the NN input, W ∈ ℝ^υ is the weight vector, and Ψ(δ) = [ψ₁(δ), ψ₂(δ), …, ψ_υ(δ)]ᵀ is the vector of radial basis functions. Here, ϱ denotes the bounded approximation error. Typically, Gaussian functions are used as basis functions, characterized by
$\psi_i(\delta) = \exp\!\left(-\frac{(\delta - \mu_i)^{T}(\delta - \mu_i)}{2\,\sigma^{T}\sigma}\right), \quad i = 1, 2, \ldots, \upsilon$ (8)
where the parameter μ i is the center point of the radial basis function, and the parameter σ is the width of the radial basis function. The width σ dictates the rate at which the radial basis function decreases. The architecture of the RBFNN is depicted in Figure 4.
With the optimal weights W * , the RBFNN can accurately approximate the function F ( δ ) to arbitrary precision. This can be formulated as:
$W^{*} = \arg\min_{\hat{W}} \sup_{\delta} \big|\hat{F}(\delta \,|\, \hat{W}) - F(\delta)\big|$ (9)
where W ^ represents the estimated value of W . Given W * , we derive:
$\big|F(\delta) - W^{*T}\Psi(\delta)\big| = |\varrho| \le \bar{\varrho}$ (10)
Here, ϱ̄ denotes an unknown positive constant bound.
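A minimal forward pass of the RBFNN described above can be sketched as follows; the center placement, width, and function names are illustrative choices for this sketch:

```python
import numpy as np

def rbf_features(delta, centers, sigma):
    """Gaussian basis vector Psi(delta) from (8):
    psi_i = exp(-(delta - mu_i)^T (delta - mu_i) / (2 sigma^2))."""
    d2 = np.sum((centers - delta) ** 2, axis=1)  # squared distances to centers
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_approx(delta, W, centers, sigma):
    """RBFNN output W^T Psi(delta), approximating F(delta) as in (7)."""
    return W.T @ rbf_features(delta, centers, sigma)
```

Each basis function peaks at 1 when the input coincides with its center μ_i and decays at a rate set by the width σ, so the weighted sum forms a smooth interpolant over the region covered by the centers.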

3. RBFNN Based Robust Integral Sign Error Dynamic Robot Control

In the sequel, the overall neural adaptive RISE controller described in Section 2.4, Section 3.1, Section 3.2 and Section 3.3 will be referred to as the ARINNSE controller (adaptive RISE controller with neural-network error compensation). In short, the RISE structure provides the robust integral-of-sign term to attenuate bounded disturbances, while the RBF neural network compensates for unknown nonlinear dynamics in an adaptive fashion.

3.1. Filtered Tracking Error Dynamics

The tracking error e ( t ) , which quantifies the deviation between the actual and desired states of the system, is initially defined as
$e = q - q_r$ (11)
The filtered tracking error is
$\gamma = \dot{e} + \eta e$ (12)
where η is a positive constant matrix, ensuring that e ( t ) asymptotically approaches zero as γ ( t ) tends to zero.
By differentiating (12) with respect to time t and substituting (3), we derive
$\dot{\gamma} = \ddot{q} - \ddot{q}_r + \eta\dot{e} = M^{-1}(q)\big[\tau - \tau_f - C(q,\dot{q})\dot{q} - G(q) - \tau_d\big] - \ddot{q}_r + \eta\dot{e} = h(Q) + g(Q)U + d(t)$ (13)
where Q = [qᵀ, q̇ᵀ, q_rᵀ, q̇_rᵀ, q̈_rᵀ]ᵀ, U = τ − τ_f, d(t) = −M⁻¹(q)τ_d, g(Q) = M⁻¹(q), and h(Q) = −M⁻¹(q)[C(q, q̇)q̇ + G(q)] − q̈_r + ηė, which is a smooth nonlinear function of Q. Define Q̄ = [qᵀ, q_rᵀ, q̇_rᵀ, q̈_rᵀ]ᵀ, whose derivative is still available for feedback. Then, differentiating h(Q) with respect to time yields
$\dot{h}(Q) = \frac{\partial h(Q)}{\partial \bar{Q}}\dot{\bar{Q}} + \frac{\partial h(Q)}{\partial \dot{q}}\ddot{q} = \frac{\partial h(Q)}{\partial \bar{Q}}\dot{\bar{Q}} + \frac{\partial h(Q)}{\partial \dot{q}} M^{-1}(q)\big[\tau - \tau_f - \tau_d - G(q) - C(q,\dot{q})\dot{q}\big]$ (14)
Since M(q) is a bounded and invertible matrix, with M_min and M_max denoting its minimum and maximum singular values, respectively, we proceed as follows:
Remark 2.
Given that q̈ is not accessible, the time derivative of h(Q) shows that ḣ(Q) can be expressed in terms of the measurable signals Q and U and the bounded yet inaccessible disturbance τ_d.
Thus, differentiating (13) yields:
$\ddot{\gamma} = \dot{h}(Q) + \dot{g}(Q)U + g(Q)\dot{U} + \dot{d}$ (15)
Furthermore, define R ¯ = γ ˙ + ξ γ , where ξ is a positive constant. This definition, combined with (15), gives
$M(q)\dot{\bar{R}} = M(q)(\ddot{\gamma} + \xi\dot{\gamma}) = M(q)\big[\dot{h}(Q) + \dot{g}(Q)U + g(Q)\dot{U} + \dot{d}\big] + \xi M(q)\big[h(Q) + g(Q)U + d\big]$ (16)
then we have the following expression:
$M(q)\dot{\bar{R}} = -\tfrac{1}{2}\dot{M}(q)\bar{R} - \gamma + \dot{U}(t) + N(\alpha) + J_d$ (17)
where
$N(\alpha) = M(q)\frac{\partial h(Q)}{\partial \bar{Q}}\dot{\bar{Q}} + M(q)\frac{\partial h(Q)}{\partial \dot{q}}\big(h(Q) + g(Q)U\big) + \tfrac{1}{2}\dot{M}(q)\big(h(Q) - g(Q)U + \xi\gamma\big) + \gamma + \xi M(q)h(Q) + \xi U$ (18)
and
$J_d = M(q)\dot{d} + \xi M(q)d + M(q)\frac{\partial h(Q)}{\partial \dot{q}}\, d + \tfrac{1}{2}\dot{M}(q)\, d$ (19)
with
$\alpha \triangleq \big[Q^{T},\; U^{T}\big]^{T}$ (20)
Remark 3.
The definitions of N(α) and J_d separate the unknown dynamics into two distinct components. Hence, the disturbance d(t) is excluded from N(α), enabling its estimation and compensation by an NN, as elaborated in Section 2.4.

3.2. RBFNN Approximation

Note that (18) defines a smooth nonlinear function, assuming h(·) and g(·) are also smooth. As discussed in Section 2.4, this auxiliary function can be effectively represented by an RBFNN:
$N(\alpha) = W^{T}\Psi(\alpha) + \varrho$ (21)
In contrast to previous NN control frameworks, the control input U can also serve as an input to the NN, facilitating the approximation of the auxiliary function N ( α ) . In this paper, the controller will be designed with U ˙ rather than U as the new state of this system.
By leveraging (21), the system dynamics in (16) can be reformulated as:
$M(q)\dot{\bar{R}} = -\tfrac{1}{2}\dot{M}(q)\bar{R} - \gamma + \dot{U} + W^{T}\Psi(\alpha) + J_d + \varrho$ (22)
The NN target weight matrix W is unknown, and N̂(α) is defined as the NN approximation of N(α):
$\hat{N}(\alpha) = \hat{W}^{T}\Psi(\alpha)$ (23)
where W ^ represents the estimated NN weight, and the input is α .

3.3. Controller Design

The control law is designed as
$\dot{U} = -(k_s + 1)\dot{\gamma} - \hat{W}^{T}\Psi(\alpha) - (k_s + 1)\xi\gamma - \beta(t)\,\mathrm{sign}(\gamma)$ (24)
where k_s > 0 is a control gain, β(t) is an adaptive term whose exact form will be detailed later, and sign(·) denotes the sign function.
Utilizing NNs to approximate a function that depends on the control signal U can lead to circular design issues if the NN output directly affects U [38]. However, in this paper, we mitigate this problem by constructing a first-order dynamic system in (24) that includes U. This approach enables the integration of the control signal into the NN without encountering circular dependencies. In practical implementation, (24) facilitates straightforward realization of the control signal using the NN. Furthermore, we rigorously demonstrate the boundedness in subsequent sections and validate it through simulation studies.
To proceed, the closed-loop tracking error system dynamics are derived by substituting (24) into (22):
$M(q)\dot{\bar{R}} = -\tfrac{1}{2}\dot{M}(q)\bar{R} - \gamma - \tilde{W}^{T}\Psi(\alpha) - (k_s + 1)\bar{R} - \beta\,\mathrm{sign}(\gamma) + \bar{J}_d(t)$ (25)
Here, J̄_d ≜ J_d + ϱ denotes the lumped disturbance and W̃ ≜ Ŵ − W the weight estimation error.
Remark 4.
Since J_d is continuously differentiable and α̇ is bounded, by the mean value theorem both J_d and J̇_d(t) are bounded above. Specifically, this implies ∥J_d∥ ≤ J_{dm0} and ∥J̇_d(t)∥ ≤ J_{dm1}. Notably, the exact values of J_{dm0} and J_{dm1} are not required for the analysis.
To further clarify, we establish that ∥J̄_d(t)∥ ≤ J̄_{dm0} and ∥J̄̇_d(t)∥ ≤ J̄_{dm1}. Moving forward, the NN weight tuning law is defined as:
$\dot{\hat{W}} = k_n \xi \Psi(\alpha)\gamma^{T} - k_n k_w |\gamma|\,\hat{W}$ (26)
Alternatively, it can be expressed as an integral equation:
$\hat{W} = k_n \int_0^t \Big[\xi \Psi(\alpha(\tau))\gamma^{T}(\tau) - k_w |\gamma(\tau)|\,\hat{W}(\tau)\Big]\, d\tau$ (27)
Here, k_n, k_w ∈ ℝ⁺ are positive user-defined parameters, and |·| denotes the ℓ₁ norm. Additionally, the adaptive term β is formulated as:
$\beta = \int_0^t \dot{\gamma}^{T}(\tau)\,\mathrm{sign}(\gamma)\, d\tau + \xi\int_0^t \gamma^{T}(\tau)\,\mathrm{sign}(\gamma)\, d\tau = \sum_{i=1}^{n}\int_0^t \dot{\gamma}_i(\tau)\,\mathrm{sign}(\gamma_i)\, d\tau + \xi\int_0^t |\gamma(\tau)|\, d\tau = \sum_{i=1}^{n}\int_{\gamma_i(0)}^{\gamma_i(t)} \mathrm{sign}(\gamma_i)\, d\gamma_i + \xi\int_0^t |\gamma(\tau)|\, d\tau$ (28)
where γ = [γ₁, γ₂, …, γₙ]ᵀ. Suppose γᵢ changes sign at countably many instants t_{ij}, i = 1, …, n, j = 1, …, n_{ij} − 1, with n_{ij} ≥ 1, i.e., γᵢ(t_{ij}) = 0 and sign(γᵢ(t₁)) = sign(γᵢ(t₂)) for all t₁, t₂ ∈ (t_{ij}, t_{i(j+1)}). Conveniently defining t_{i0} = 0 and t_{i n_{ij}} = t for all i = 1, …, n, we obtain:
$\beta = \sum_{i=1}^{n}\sum_{j=0}^{n_{ij}-1}\int_{\gamma_i(t_{ij})}^{\gamma_i(t_{i(j+1)})} \mathrm{sign}(\gamma_i)\, d\gamma_i + \xi\int_0^t |\gamma(\tau)|\, d\tau = \sum_{i=1}^{n}\big(|\gamma_i(t)| - |\gamma_i(0)|\big) + \xi\int_0^t |\gamma(\tau)|\, d\tau = |\gamma(t)| - |\gamma(0)| + \xi\int_0^t |\gamma(\tau)|\, d\tau$ (29)
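The reduction to the closed form above relies on ∫ sign(γᵢ) dγᵢ accumulating to |γᵢ(t)| − |γᵢ(0)| across sign changes. A quick numerical check with an illustrative scalar signal (the test function and step size are assumptions of this sketch) confirms the identity:

```python
import numpy as np

# Numerical check of the closed form: for a scalar gamma(t), the integral of
# gamma_dot * sign(gamma) over [0, t] equals |gamma(t)| - |gamma(0)|, even when
# gamma crosses zero. Illustrative signal: gamma(t) = sin(3t) + 0.2.
t = np.linspace(0.0, 2.0, 200001)
dt = t[1] - t[0]
gamma = np.sin(3.0 * t) + 0.2
gamma_dot = 3.0 * np.cos(3.0 * t)

lhs = np.sum(gamma_dot * np.sign(gamma)) * dt   # left Riemann sum of the integral
rhs = abs(gamma[-1]) - abs(gamma[0])            # closed form |gamma(t)| - |gamma(0)|
```

The two quantities agree to within the quadrature error of the Riemann sum, including the interval where γ passes through zero.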
Remark 5.
The time-varying control gain β adjusts independently of the NN reconstruction error and disturbance bounds. It incrementally increases to effectively account for both disturbances and NN reconstruction errors within the design framework. To enhance robustness, a virtual dead-zone operator is incorporated to prevent β from excessively escalating.
Mathematically
$\beta = \int_{\gamma(0)}^{\gamma(t)} \mathrm{sign}(\gamma)\, d\gamma + \xi\int_0^t \big|Z[\gamma(\tau)]\big|\, d\tau$ (30)
where
$Z[\gamma(t)] = \begin{cases} 0, & |\gamma(t)| \le \epsilon \\ \gamma(t), & |\gamma(t)| > \epsilon \end{cases}$ (31)
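Putting the pieces together, one inner-loop update can be sketched as below. This is an illustrative Euler discretization, not the authors' implementation: the backward-difference estimate of γ̇, the dead-zone width eps, the gain values, and the function name are all assumptions of the sketch.

```python
import numpy as np

def arinnse_step(gamma, gamma_prev, W_hat, U, beta, Psi, ks, xi, kn, kw, eps, dt):
    """One Euler step of the inner loop: control law (24), NN weight tuning
    law, and the dead-zone-protected adaptive gain beta. Psi is the current
    RBF feature vector and gamma the filtered tracking error."""
    dgamma = (gamma - gamma_prev) / dt  # backward-difference estimate of gamma_dot
    # Adaptive gain beta: integrate gamma_dot^T sign(gamma), and |gamma| only
    # outside the dead zone |gamma| <= eps to prevent unbounded growth.
    z = np.where(np.abs(gamma) > eps, gamma, 0.0)
    beta = beta + (dgamma @ np.sign(gamma) + xi * np.sum(np.abs(z))) * dt
    # NN weight tuning: W_hat_dot = kn*xi*Psi*gamma^T - kn*kw*|gamma|_1*W_hat
    W_hat = W_hat + (kn * xi * np.outer(Psi, gamma)
                     - kn * kw * np.sum(np.abs(gamma)) * W_hat) * dt
    # Control law (24), integrated once to recover U from U_dot
    U_dot = (-(ks + 1.0) * dgamma - W_hat.T @ Psi
             - (ks + 1.0) * xi * gamma - beta * np.sign(gamma))
    U = U + U_dot * dt
    return U, W_hat, beta
```

Because the controller is defined through U̇, the applied input U is obtained by integration and therefore remains continuous even though a sign function appears in the update, which is the chattering-mitigation property emphasized above.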
Define N_B(W̃, α) = W̃ᵀΨ(α). It can be shown that N_B(W̃, α) is continuous and differentiable. Referring to (22) and considering ∥W∥ ≤ W_m, ∥Ψ∥ ≤ Ψ_{m0}, ∥Ψ̇∥ ≤ Ψ_{m1}, and the boundedness of α̇, it follows that N_B(W̃, α) and its derivative are bounded:
$\|N_B\| \le N_{m0}$ (32)
$\dot{N}_B = \dot{\tilde{W}}^{T}\Psi(\alpha) + \tilde{W}^{T}\frac{\partial \Psi(\alpha)}{\partial \alpha}\dot{\alpha}, \qquad \|\dot{N}_B\| \le N_{m1}$ (33)
with N m 0 and N m 1 being positive unknown constants.

4. Simulation

The dynamic and control parameters used in the simulation are selected based on widely adopted benchmark models of two-link manipulators [19,39]. The arm dynamics follow the standard rigid-body model with ideal joint actuators and full-state feedback, and the controller is implemented in continuous time and numerically integrated with a sufficiently small fixed step size. In this study, sensor measurements are assumed to be noise-free, communication channels are modeled without delay or jitter, and actuator saturation and anti-windup mechanisms are not explicitly included. These idealized assumptions allow us to isolate and highlight the intrinsic properties of the proposed ARINNSE controller and its comparison with SMC and classical RISE, while more realistic implementation issues such as sampling effects, quantization, time delay, and torque limits will be addressed in future experimental work.
The coefficients b 1 , b 2 , b 3 , and g correspond to the standard inertia and gravity parameters for this robot model. The controller gains k s , k w , k n , ξ , and η were initially chosen according to the stability constraints derived in Section 3.3 and then fine-tuned through simulation trials to balance transient response and control effort. This parameter selection ensures both stability and practical realizability of the proposed method. The dynamic equations governing the arm’s motion are given by:
$A_1 \begin{bmatrix} \ddot{q}_1 \\ \ddot{q}_2 \end{bmatrix} + A_2 + A_3 + \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} \tau_1 \\ \tau_2 \end{bmatrix} - \begin{bmatrix} \tau_f \\ \tau_f \end{bmatrix}$ (34)
where
$A_1 = \begin{bmatrix} b_1 + b_2 + 2 b_3 \cos q_2 & b_2 + b_3 \cos q_2 \\ b_2 + b_3 \cos q_2 & b_2 \end{bmatrix}$ (35)
and
$A_2 = \begin{bmatrix} -b_3 \big(2\dot{q}_1\dot{q}_2 + \dot{q}_2^2\big)\sin q_2 \\ b_3 \dot{q}_1^2 \sin q_2 \end{bmatrix}$ (36)
with
$A_3 = \begin{bmatrix} b_1 g \cos q_1 + b_3 g \cos(q_1 + q_2) \\ b_3 g \cos(q_1 + q_2) \end{bmatrix}$ (37)
Here, b₁ = 3.473, b₂ = 0.196, b₃ = 0.242, and g = 9.8 m/s². The joint angles q₁ and q₂ define the state of the robotic arm, τ₁ and τ₂ are the applied torques on the joints, and τ_f denotes the interaction torque.
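With the stated parameters, the dynamic terms can be encoded directly. The sketch below mirrors the matrices above (the sign conventions follow the standard two-link benchmark model; the helper names are illustrative):

```python
import numpy as np

# Benchmark two-link parameters from the text
b1, b2, b3, g = 3.473, 0.196, 0.242, 9.8

def inertia(q):
    """Inertia matrix A1(q)."""
    c2 = np.cos(q[1])
    return np.array([[b1 + b2 + 2.0 * b3 * c2, b2 + b3 * c2],
                     [b2 + b3 * c2,            b2          ]])

def coriolis(q, dq):
    """Coriolis/centripetal vector A2(q, dq)."""
    s2 = np.sin(q[1])
    return np.array([-b3 * (2.0 * dq[0] * dq[1] + dq[1] ** 2) * s2,
                      b3 * dq[0] ** 2 * s2])

def gravity(q):
    """Gravity vector A3(q)."""
    return np.array([b1 * g * np.cos(q[0]) + b3 * g * np.cos(q[0] + q[1]),
                     b3 * g * np.cos(q[0] + q[1])])
```

As expected of a rigid-body model, the encoded inertia matrix is symmetric and positive definite, and the Coriolis vector vanishes at zero joint velocity.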
For human–robot interaction, an admittance model is employed to ensure smooth interaction. The admittance control law is designed to govern the robot’s response to external forces exerted by the human operator. Specifically, the robot’s end-effector compliance is regulated such that it follows the intended trajectory while absorbing external forces within specified limits. The admittance parameters are adjusted to accommodate varying interaction forces, thereby enhancing safety and performance during collaborative tasks.
In our simulations, we validate the effectiveness of the proposed control strategy by evaluating the arm’s trajectory tracking performance under different interaction scenarios. The results demonstrate robustness against disturbances and accurate trajectory following, illustrating the practical applicability of the designed admittance-based control framework in human–robot interaction scenarios.

4.1. Tracking Performance Evaluation

To evaluate the effectiveness of the proposed ARINNSE controller, a comparative study was conducted against two classical control strategies: sliding mode control (SMC) and the standard RISE controller. Under ideal conditions without external interaction, the trajectory tracking performance of each controller for the robotic manipulator's end-effector was investigated. The desired trajectory of the manipulator system was set as q_d = [sin(t); cos(t)], starting from the initial position q_0 = [0; 1].
The parameters of the proposed ARINNSE controller were selected as follows: k_s = 20, k_w = 1, k_n = 1, ξ = 20, and η = [1.5; 1.5]. This selection was based on the gain tuning criteria derived from Lyapunov stability conditions in reference [40]. For comparison, the SMC controller parameters were set as λ_smc = [5, 0; 0, 5] and β_smc = 1.5, following the sliding surface and switching gain design method outlined in reference [41]. The RISE controller was configured with k_s1 = 20, β_c1 = 5.8, ξ_1 = 20, and η_1 = [1.5; 1.5], in accordance with the parameter configuration principles recommended under the robust integral of the sign of the error framework in reference [36]. All controller parameters were systematically tuned to ensure fairness in comparison and reliability of the results.
Figure 5 and Figure 6 illustrate the position tracking of joints 1 and 2, respectively, in joint space. The black solid lines represent the desired reference trajectories, while the yellow, blue, and red dashed lines correspond to the trajectories achieved using the SMC, RISE, and ARINNSE controllers, respectively. The comparison shows that all controllers achieve excellent trajectory tracking, with the actual motion trajectories closely matching the desired ones throughout the entire duration. Notably, the ARINNSE controller exhibits performance nearly identical to that of the RISE controller, with both offering highly accurate tracking with negligible phase delay and amplitude deviation. The SMC controller, while still effective, shows slightly more deviation, particularly in the oscillatory peaks, suggesting that it is less robust to system uncertainties and disturbances compared to the ARINNSE and RISE controllers. Both joints exhibit precise tracking, with the small transient deviations observed in the initial 1–2 s rapidly diminishing. This quick convergence indicates that the controllers effectively compensate for the initial mismatch and unmodeled dynamics. The transient deviations observed in the SMC and RISE controllers are significantly larger compared to the ARINNSE controller, further supporting the superior robustness of ARINNSE.
Figure 7 and Figure 8 compare the tracking errors for joints 1 and 2, respectively. The tracking error, defined as the difference between the desired trajectory and the actual joint position, provides a direct measure of control performance. For joint 1 (Figure 7), the ARINNSE controller shows the smallest tracking error, particularly after the initial transient phase: the error magnitude decreases sharply within the first few seconds and settles into a very small neighborhood of zero. In contrast, the SMC controller exhibits the largest initial error, which persists for a longer duration, while the RISE controller converges more slowly than ARINNSE. Similarly, for joint 2 (Figure 8), the ARINNSE controller achieves the smallest tracking error, with the RISE controller close behind; the SMC controller again exhibits the highest tracking error, especially in the early stages of the trajectory. The rapid error decay for both joints highlights the effectiveness of the RISE term in suppressing bounded disturbances and residual neural reconstruction errors, while the ARINNSE controller further benefits from online RBFNN adaptation to compensate for model uncertainties. The steady-state errors of both joints remain below 0.01 rad, confirming the precision and robustness of the proposed control law.
Figure 9 and Figure 10 present the control inputs for joints 1 and 2, respectively. All three controllers (SMC, RISE, and ARINNSE) require substantial control effort, but they differ noticeably in control smoothness. The ARINNSE and RISE controllers exhibit similar control input magnitudes, remaining smooth and bounded throughout the trajectory-tracking process; no high-frequency oscillations or saturation are observed, indicating that both controllers maintain smooth control while ensuring effective tracking. In contrast, the SMC controller displays more oscillatory behavior, the chattering characteristic of sliding mode control, especially under high-frequency disturbances. These oscillations produce larger fluctuations in control effort and reduce control efficiency.
From the results presented in Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10, it is evident that the ARINNSE controller outperforms both the SMC and RISE controllers in terms of tracking accuracy, error suppression, and stability. Although the RISE controller delivers comparable performance, the ARINNSE controller responds more efficiently, reducing the tracking error while keeping control effort essentially unchanged. The SMC controller, while still effective, shows higher tracking error and greater oscillations in its control input, which could limit its applicability in high-precision tasks. These findings highlight the suitability of the ARINNSE controller for applications where both tracking accuracy and smooth, moderate control effort are crucial.
To comprehensively evaluate the controller’s performance, we define three performance indices to assess the quality of the proposed control strategy. Let T_f denote the total running time of the simulation.
(1) E_2a[e]: This metric quantifies the E_2a (root-mean-square) norm of the tracking error e over the entire duration T_f:
E_2a[e] = [ (1/T_f) ∫₀^{T_f} |e(t)|² dt ]^{1/2}
E_2a[e] provides a numerical measure of the average tracking-error performance.
(2) E_ma: This index represents the maximum absolute tracking error during the last 2 s of the simulation:
E_ma = max_{T_f − 2 ≤ t ≤ T_f} |e(t)|
E_ma serves as an indicator of the final accuracy of the trajectory tracking.
(3) E_2a[U]: It evaluates the E_2a norm of the control input U over T_f:
E_2a[U] = [ (1/T_f) ∫₀^{T_f} |U(t)|² dt ]^{1/2}
E_2a[U] assesses the average magnitude of the control effort required by the controller.
These performance indices provide objective measures to assess the robustness and efficiency of the proposed control method in achieving accurate trajectory tracking and managing control effort throughout the simulation duration.
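For concreteness, the three indices above can be computed from uniformly sampled simulation data as in the following sketch. The helper name `performance_indices` and the sampled-signal interface are illustrative assumptions, not part of the paper's implementation.

```python
import math

def performance_indices(t, e, u):
    """Compute E_2a[e], E_ma, and E_2a[U] from uniformly sampled
    signals t (time), e (tracking error), u (control input)."""
    n = len(t)
    dt = (t[-1] - t[0]) / (n - 1)   # uniform sampling step
    Tf = t[-1] - t[0]               # total running time
    # E_2a: discrete approximation of the RMS norm over [0, Tf]
    e2a_e = math.sqrt(sum(ei**2 for ei in e) * dt / Tf)
    e2a_u = math.sqrt(sum(ui**2 for ui in u) * dt / Tf)
    # E_ma: maximum absolute error over the last 2 s of the run
    e_ma = max(abs(ei) for ti, ei in zip(t, e) if ti >= t[-1] - 2.0)
    return e2a_e, e_ma, e2a_u
```

For a constant error e(t) = c and input U(t) = c_u, the indices reduce to |c| and |c_u|, which provides a quick sanity check of the discretization.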
Table 1 and Table 2 present the three metrics for Joint 1 and Joint 2, respectively. Based on the trajectory-tracking indices reported in Table 1 and Table 2, a comparative assessment of the three controllers shows the following. For the joint-1 task, the ARINNSE controller exhibits a clear advantage: its root-mean-square tracking error is reduced by approximately 51.7% and 42.3% relative to SMC and RISE, respectively; its maximum absolute error is also the smallest among all methods, while the control-energy index remains essentially identical to that of RISE. These outcomes substantiate the effectiveness of the RBFNN-based adaptive mechanism in handling complex nonlinear dynamics. In particular, under strong inter-joint coupling, the neural approximator accurately captures unmodeled dynamics, thereby markedly improving tracking accuracy. By contrast, in the joint-2 test, SMC attains the best RMS error, outperforming both ARINNSE and RISE. This result highlights the inherent strength of classical sliding-mode control in systems with relatively simple dynamics.
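The quoted reductions follow directly from the E_2a(e) entries of Table 1; the short check below reproduces them from the rounded table values (so the percentages match the text to within rounding):

```python
# RMS tracking errors for joint 1, copied from Table 1 (rad)
e_smc, e_rise, e_arinnse = 0.1065, 0.0893, 0.0515

# Relative reduction achieved by ARINNSE versus each baseline
red_vs_smc = (e_smc - e_arinnse) / e_smc    # ~0.516 from the rounded entries
red_vs_rise = (e_rise - e_arinnse) / e_rise  # ~0.423
print(f"vs SMC: {red_vs_smc:.1%}, vs RISE: {red_vs_rise:.1%}")
```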
Overall, the observed performance is distinctly joint-dependent, reflecting the underlying complexity and heterogeneity of the manipulator dynamics. ARINNSE excels on joint 1, where nonlinear effects are pronounced, demonstrating the adaptability of intelligent control in challenging regimes; SMC retains its traditional advantage on joint 2, evidencing its structural robustness in simpler regimes. These findings suggest that, for multi-joint manipulators, a dynamics-aware hybrid control architecture—assigning the most suitable controller to each joint according to its dynamics—may yield superior system-level performance by enhancing accuracy while preserving stability.

4.2. Human–Robot Interaction Test

In this section, the robotic manipulator is engaged in an emulated physical human–robot interaction task. The cooperative task is defined as guiding the end-effector to follow a smooth desired Cartesian trajectory x_d(t) in a shared workspace while a human partner applies time-varying interaction forces. In the present simulation-only study, the human influence is modeled as a bounded disturbance d(t) = [10 sin(t), 10 sin(t)]^T acting on the joints, which plays the role of an exogenous interaction torque rather than a sensor-measured contact force. The operator’s intention is encoded at the task level through the desired trajectory x_d(t), while the admittance model reshapes this reference in response to the synthetic interaction force τ_f, thereby representing an assistive cooperation mode in which the robot yields compliantly to external forces but still aims to accomplish the nominal task. The initial joint states are chosen as q_1(0) = 0.5 and q_2(0) = 0.9. The control gains are set as k_s = 100, k_w = 1, k_n = 1, ξ = 100, and η = [1.5, 1.5], and the admittance parameters are M_d = diag(10, 10), C_d = diag(10, 10), and G_d = diag(1, 1). All state variables q_1, q_2, q̇_1, q̇_2 and the interaction torque τ_f are assumed to be exactly known from the model, and no specific sensor, signal-conditioning electronics, or calibration procedure is implemented in this work; these aspects will be realized on a physical platform in our future experimental validation. The objective is to guide the robot arm to track a desired trajectory defined by:
x_d(1) = 0.2768 − 1.4795 × 10⁻⁵ t³ + 7.3975 × 10⁻⁷ t⁴ − 9.8634 × 10⁻⁹ t⁵
x_d(2) = 0.4065 + 2.1242 × 10⁻⁵ t³ − 1.0621 × 10⁻⁶ t⁴ + 1.4162 × 10⁻⁸ t⁵
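The coefficient ratios of these quintics (a4/a3 = −1.5/T and a5/a3 = 0.6/T² with T = 30 s) are consistent with a rest-to-rest minimum-jerk profile; this horizon is our inference from the printed values, not stated explicitly in the text. A minimal sketch evaluating the trajectory and checking the boundary conditions:

```python
# [x0, a3, a4, a5] per Cartesian coordinate, from the equations above
A = [
    [0.2768, -1.4795e-5,  7.3975e-7, -9.8634e-9],
    [0.4065,  2.1242e-5, -1.0621e-6,  1.4162e-8],
]

def x_d(t):
    """Desired Cartesian position at time t."""
    return [x0 + a3*t**3 + a4*t**4 + a5*t**5 for x0, a3, a4, a5 in A]

def xdot_d(t):
    """Desired Cartesian velocity at time t."""
    return [3*a3*t**2 + 4*a4*t**3 + 5*a5*t**4 for _, a3, a4, a5 in A]
```

At t = 0 and t = 30 s the velocity is numerically zero, confirming a smooth rest-to-rest motion between x_d(0) = (0.2768, 0.4065) and roughly (0.237, 0.464).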
Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 present the simulation results of the robotic arm under admittance control. During the simulation, an external force acts on the robot arm from 10 to 15 s, causing a deviation from the desired trajectory. Once the external force is removed, the admittance control drives the reference trajectory X_r gradually back toward the desired trajectory X_d, achieving alignment over time. The robot arm then follows the reference trajectory to complete the task.
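The outer-loop behavior described above can be sketched per coordinate, assuming the common admittance form M_d ë + C_d ė + G_d e = τ_f with e = X_r − X_d. The diagonal gains (10, 10, 1) are taken from the text, while the 5 N step-force profile and the explicit Euler integration are illustrative assumptions rather than the paper's exact setup.

```python
# Outer admittance loop, single coordinate: Md*e'' + Cd*e' + Gd*e = tau_f,
# where e = X_r - X_d. Gains from the text; force profile is illustrative.
Md, Cd, Gd = 10.0, 10.0, 1.0
dt, T = 0.001, 30.0

e, edot = 0.0, 0.0      # reference initially aligned with desired trajectory
trace = []
t = 0.0
while t < T:
    tau_f = 5.0 if 10.0 <= t < 15.0 else 0.0   # external force window (N)
    eddot = (tau_f - Cd * edot - Gd * e) / Md  # admittance dynamics
    edot += eddot * dt                         # explicit Euler step
    e += edot * dt
    trace.append((t, e))
    t += dt
```

While the force is applied, e grows toward the static offset τ_f/G_d; after the force is removed, both admittance poles are stable (roots of 10s² + 10s + 1 = 0), so e decays back toward zero, which is exactly the X_r → X_d convergence seen in the figures.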

4.2.1. Tracking Performance in Joint and Cartesian Space

Joint Space: Figure 11 and Figure 12 show the tracking performance in joint space, where the blue, red, and black lines indicate the actual, reference, and desired trajectories, respectively.
Cartesian Space: Figure 13 and Figure 14 display the tracking performance in Cartesian space, with similar color coding for actual, reference, and desired trajectories.

4.2.2. Tracking Error and Stability

Error Convergence: Figure 15 demonstrates that the tracking error converges to zero, highlighting the asymptotic performance of the system.

4.2.3. NN Weights and Adaptive Term

NN Weights and Adaptive Term: Figure 17 and Figure 18 provide insights into the norms of the NN weights and the adaptive term β, highlighting their behavior over time.

4.2.4. Control Signal Boundedness

Control Signals: Figure 16 depicts the control signals, confirming their bounded nature.
Based on the above analysis, the designed closed-loop control framework not only keeps all signals well bounded but also achieves excellent tracking performance.
Table 3 summarizes the key performance metrics of the human–robot interaction test, supporting the effectiveness of the proposed approach in this simulated interaction scenario.
It should be emphasized that the above human–robot interaction test corresponds to a single, representative cooperative reaching task under nominal admittance and controller parameters. This setting is intended as a baseline scenario to isolate the effect of the proposed ARINNSE controller and to enable a fair comparison with existing methods. A systematic robustness study under multiple task profiles (e.g., different reference trajectories and payload conditions), varying environment stiffness/damping, and intentional perturbations of controller gains is beyond the scope of this paper and will be carried out in our future experimental campaign on a physical human–robot interaction platform.

5. Conclusions and Future Work

This paper presents an admittance-based neural adaptive RISE control framework for a two-link manipulator operating in a physical human–robot interaction (pHRI) setting. By combining an outer-loop admittance model with an inner-loop neural adaptive RISE controller, the proposed framework enables compliant motion while compensating for unknown nonlinear dynamics and bounded interaction torques. The neural network term adaptively approximates lumped uncertainties, while the RISE structure provides strong robustness and semi-global asymptotic convergence properties. Lyapunov-based analysis has shown that all closed-loop signals remain bounded and that the tracking error converges to a small neighborhood of the origin under suitable gain selection. Numerical simulations demonstrate improved tracking performance and disturbance rejection compared to conventional RISE and sliding-mode baseline controllers, without increasing control effort.
However, several important limitations of the present study must be acknowledged. First, all results are based on idealized numerical simulations of a standard two-link manipulator model, where sensor measurements are assumed to be noise-free, communication channels are modeled without delay or jitter, and actuator saturation and anti-windup mechanisms are not explicitly considered. Second, the human influence is modeled as a bounded exogenous interaction torque, and the cooperative task is limited to a single reference-tracking scenario in a fixed environment. In this sense, the operator’s intention is encoded indirectly through the desired trajectory and synthetic interaction torque profiles, and no explicit intention inference or task-level goal estimation module is included in the framework. Third, only an assistive admittance mode is considered, with no systematic design for manual, assistive, and fully autonomous operation modes or the associated safety and takeover policies. From a learning perspective, the neural network adaptation law is analyzed under implicit persistence-of-excitation assumptions, while weight regularization mechanisms and a quantitative characterization of the trade-off between learning speed, robustness margins, and interaction safety are not yet systematically examined.
These limitations also suggest several directions for future work. On the experimental side, we plan to implement the proposed controller on a physical pHRI platform equipped with force/torque sensing, signal-conditioning electronics, and calibration procedures. We will investigate sampled-data implementation issues including sampling rate, communication delay/jitter, encoder quantization, actuator saturation, and appropriate anti-windup schemes. On the control design side, we intend to extend the framework to multiple cooperation modes with explicit safety constraints, for instance, via control barrier functions or related safety filters. Additionally, we will incorporate explicit operator-intent inference modules based on measured interaction forces, motion, and potentially additional modalities. On the learning side, future work will incorporate explicit regularization mechanisms (e.g., projection operators or σ -modification), analyze practical excitation conditions for typical cooperative tasks, and relate adaptation gains and exploration richness to safety indices such as peak interaction force and saturation margins. Finally, beyond the objective tracking, force, and control-effort indices reported in this paper, future experimental studies will explicitly incorporate user-centered outcome measures, including perceived comfort, mental and physical workload (e.g., NASA-TLX), and subjective ratings of safety and trust, so as to more comprehensively evaluate the proposed controller in realistic pHRI scenarios.

Author Contributions

Conceptualization, S.C., L.J. and K.B.; methodology, K.B. and Y.C.; software, X.X. and G.J.; validation, S.C., L.J. and Y.L.; formal analysis, K.B. and Y.C.; investigation, X.X. and G.J.; resources, S.C. and L.J.; data curation, X.X. and G.J.; writing—original draft preparation, S.C. and K.B.; writing—review and editing, L.J., Y.C. and Y.L.; visualization, X.X. and G.J.; supervision, K.B. and Y.C.; project administration, S.C. and L.J.; funding acquisition, K.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology Program of Jiangsu Administration for Market Regulation under grant number KJ2025006, and Jiangsu Funding Program for Excellent Postdoctoral Talent, and the Open Project of Sichuan Provincial Key Laboratory of Special Environment Robotics Technology, Southwest University of Science and Technology under grant number 23kftk03.

Data Availability Statement

The original contributions presented in this study are included in the article material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Y.; Wang, H.; Fan, Q. Adaptive learning and sliding mode control for a magnetic microrobot precision tracking with uncertainties. IEEE Robot. Autom. Lett. 2023, 8, 7767–7774. [Google Scholar] [CrossRef]
  2. Zeng, C.; Yang, C.; Chen, Z. Bio-inspired robotic impedance adaptation for human-robot collaborative tasks. Sci. China Inf. Sci. 2020, 63, 170201. [Google Scholar]
  3. Li, X.; Lu, Q.; Chen, J.; Jiang, N.; Li, K. A Robust Region Control Approach for Simultaneous Trajectory Tracking and Compliant Physical Human–Robot Interaction. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 6388–6400. [Google Scholar] [CrossRef]
  4. Liu, Y.; Li, Z.; Liu, H.; Kan, Z. Skill transfer learning for autonomous robots and human–robot cooperation: A survey. Robot. Auton. Syst. 2020, 128, 103515. [Google Scholar] [CrossRef]
  5. Jiang, Z.; Wang, Z.; Lv, Q.; Yang, J. Impedance Learning-Based Hybrid Adaptive Control of Upper Limb Rehabilitation Robots. Actuators 2024, 13, 220. [Google Scholar] [CrossRef]
  6. Selvaggio, M.; Cognetti, M.; Nikolaidis, S.; Ivaldi, S.; Siciliano, B. Autonomy in physical human-robot interaction: A brief survey. IEEE Robot. Autom. Lett. 2021, 6, 7989–7996. [Google Scholar] [CrossRef]
  7. Ye, D.; Yang, C.; Jiang, Y.; Zhang, H. Hybrid impedance and admittance control for optimal robot—Environment interaction. Robotica 2024, 42, 510–535. [Google Scholar]
  8. Wang, Z.; Xu, H.; Jiang, R.; Zhou, Y.; Jiang, S.; Cheng, X.; Li, X.; He, B. Adaptive Variable Impedance Control in Physical Human-Robot Interaction based on Arm End Stiffness Estimation. IEEE Trans. Instrum. Meas. 2025, 74, 7510812. [Google Scholar]
  9. Ferraguti, F.; Talignani Landi, C.; Sabattini, L.; Bonfe, M.; Fantuzzi, C.; Secchi, C. A variable admittance control strategy for stable physical human—Robot interaction. Int. J. Robot. Res. 2019, 38, 747–765. [Google Scholar]
  10. Sharifi, M.; Zakerimanesh, A.; Mehr, J.K.; Torabi, A.; Mushahwar, V.K.; Tavakoli, M. Impedance variation and learning strategies in human-robot interaction. IEEE Trans. Cybern. 2021, 52, 6462–6475. [Google Scholar]
  11. Wu, M.H.; Ogawa, S.; Konno, A. Symmetry position/force hybrid control for cooperative object transportation using multiple humanoid robots. Adv. Robot. 2016, 30, 131–149. [Google Scholar] [CrossRef]
  12. Ngo, V.T.; Liu, Y.C. Adaptive Impedance and Admittance Controls for Physical Human-Robot Interaction with Force-Sensorless. In Proceedings of the 2024 American Control Conference (ACC), Toronto, ON, Canada, 10–12 July 2024; IEEE: New York, NY, USA, 2024; pp. 3791–3796. [Google Scholar]
  13. Hogan, N. Impedance control: An approach to manipulation. In Proceedings of the 1984 American Control Conference, San Diego, CA, USA, 6–8 June 1984; IEEE: New York, NY, USA, 1984; pp. 304–313. [Google Scholar]
  14. Ott, C.; Mukherjee, R.; Nakamura, Y. Unified impedance and admittance control. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–8 May 2010; IEEE: New York, NY, USA, 2010; pp. 554–561. [Google Scholar]
  15. Liu, Y.; Li, Z.; Liu, H.; Kan, Z.; Xu, B. Bioinspired embodiment for intelligent sensing and dexterity in fine manipulation: A survey. IEEE Trans. Ind. Inform. 2020, 16, 4308–4321. [Google Scholar] [CrossRef]
  16. Guo, Y.; Peng, G.; Yang, C. Adaptive Admittance Control with Dynamical Systems for Optimized Motion Adaptation. In Proceedings of the 2024 International Conference on Advanced Robotics and Mechatronics (ICARM), Hyderabad, India, 31 July 2024; IEEE: New York, NY, USA, 2024; pp. 1032–1037. [Google Scholar]
  17. Abdullahi, A.M.; Haruna, A.; Chaichaowarat, R. Hybrid adaptive impedance and admittance control based on the sensorless estimation of interaction joint torque for exoskeletons: A case study of an upper limb rehabilitation robot. J. Sens. Actuator Netw. 2024, 13, 24. [Google Scholar] [CrossRef]
  18. Bascetta, L. A passivity-based adaptive admittance control strategy for physical human-robot interaction in hands-on tasks. In Proceedings of the 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), Mexico City, Mexico, 20–24 August 2022; IEEE: New York, NY, USA, 2022; pp. 2267–2272. [Google Scholar]
  19. Slotine, J.J.E.; Li, W. On the adaptive control of robot manipulators. Int. J. Robot. Res. 1987, 6, 49–59. [Google Scholar] [CrossRef]
  20. Slotine, J.J.E.; Li, W. Composite adaptive control of robot manipulators. Automatica 1989, 25, 509–519. [Google Scholar] [CrossRef]
  21. Hsia, T. Adaptive control of robot manipulators-a review. In Proceedings of the 1986 IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 7–10 April 1986; IEEE: New York, NY, USA, 1986; Volume 3, pp. 183–189. [Google Scholar]
  22. Zhang, S.; Wu, Y.; He, X.; Wang, J. Neural Network-Based Cooperative Trajectory Tracking Control for a Mobile Dual Flexible Manipulator. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 6545–6556. [Google Scholar] [CrossRef]
  23. Si, C.; Wang, Q.G.; Yu, J. Event-Triggered Adaptive Fuzzy Neural Network Output Feedback Control for Constrained Stochastic Nonlinear Systems. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 5345–5354. [Google Scholar] [CrossRef]
  24. Deng, W.; Zhou, H.; Zhou, J.; Yao, J. Neural Network-Based Adaptive Asymptotic Prescribed Performance Tracking Control of Hydraulic Manipulators. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 285–295. [Google Scholar] [CrossRef]
  25. Yu, X.; Hou, Z.; Polycarpou, M.M. Controller-dynamic-linearization-based data-driven ILC for nonlinear discrete-time systems with RBFNN. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 4981–4992. [Google Scholar] [CrossRef]
  26. Ren, X.; Liu, Y.; Hu, Y.; Li, Z. Integrated Task Sensing and Whole Body Control for Mobile Manipulation With Series Elastic Actuators. IEEE Trans. Autom. Sci. Eng. 2022, 20, 413–424. [Google Scholar] [CrossRef]
  27. Liu, Y.; Li, Z.; Su, H.; Su, C.Y. Whole-Body Control of an Autonomous Mobile Manipulator Using Series Elastic Actuators. IEEE/ASME Trans. Mechatron. 2021, 26, 657–667. [Google Scholar] [CrossRef]
  28. Fei, J.; Wang, T. Adaptive fuzzy-neural-network based on RBFNN control for active power filter. Int. J. Mach. Learn. Cybern. 2019, 10, 1139–1150. [Google Scholar] [CrossRef]
  29. Ge, S.; Hang, C.; Lee, T.; Zhang, T. Stable Adaptive Neural Network Control Design; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar] [CrossRef]
  30. Hayakawa, T.; Haddad, W.M.; Hovakimyan, N. Neural network adaptive control for a class of nonlinear uncertain dynamical systems with asymptotic stability guarantees. IEEE Trans. Neural Netw. 2008, 19, 80–89. [Google Scholar] [CrossRef]
  31. Chen, M.; Ge, S.S.; How, B.V.E. Robust adaptive neural network control for a class of uncertain MIMO nonlinear systems with input nonlinearities. IEEE Trans. Neural Netw. 2010, 21, 796–812. [Google Scholar] [CrossRef]
  32. Luo, J.; Zhang, C.; Si, W.; Jiang, Y.; Yang, C.; Zeng, C. A physical human–robot interaction framework for trajectory adaptation based on human motion prediction and adaptive impedance control. IEEE Trans. Autom. Sci. Eng. 2024, 22, 5072–5083. [Google Scholar] [CrossRef]
  33. Wu, Y.; Yue, D.; Dong, Z. Robust integral of neural network and precision motion control of electrical–optical gyro-stabilized platform with unknown input dead-zones. ISA Trans. 2019, 95, 254–265. [Google Scholar] [PubMed]
  34. Liu, C.; Zhao, K.; Si, W.; Li, J.; Yang, C. Neuroadaptive Admittance Control for Human–Robot Interaction With Human Motion Intention Estimation and Output Error Constraint. IEEE Trans. Cybern. 2025, 55, 3005–3016. [Google Scholar] [CrossRef]
  35. Lee, H.; Utkin, V.I. Chattering suppression methods in sliding mode control systems. Annu. Rev. Control 2007, 31, 179–188. [Google Scholar] [CrossRef]
  36. Yang, Q.; Jagannathan, S.; Sun, Y. Robust integral of neural network and error sign control of MIMO nonlinear systems. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 3278–3286. [Google Scholar] [CrossRef]
  37. Sciavicco, L.; Siciliano, B. Modelling and Control of Robot Manipulators; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  38. Wang, C.; Hill, D.J.; Ge, S.S.; Chen, G. An ISS-modular approach for adaptive neural control of pure-feedback systems. Automatica 2006, 42, 723–731. [Google Scholar] [CrossRef]
  39. He, W.; Xue, C.; Yu, X.; Li, Z.; Yang, C. Admittance-based controller design for physical human–robot interaction in the constrained task space. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1937–1949. [Google Scholar] [CrossRef]
  40. Huo, Y.; Li, P.; Chen, D.; Liu, Y.H.; Li, X. Model-free adaptive impedance control for autonomous robotic sanding. IEEE Trans. Autom. Sci. Eng. 2021, 19, 3601–3611. [Google Scholar] [CrossRef]
  41. Shtessel, Y.; Edwards, C.; Fridman, L.; Levant, A. Sliding Mode Control and Observation; Springer: Berlin/Heidelberg, Germany, 2014; Volume 10. [Google Scholar]
Figure 1. Human–robot compliant interaction motion model.
Figure 2. Control scheme of the physical human–robot interaction.
Figure 3. Physical interaction model.
Figure 4. The structure of the RBFNNs.
Figure 5. Tracking performances of joint 1 in joint space.
Figure 6. Tracking performances of joint 2 in joint space.
Figure 7. Tracking error of joint 1.
Figure 8. Tracking error of joint 2.
Figure 9. The control input of joint 1.
Figure 10. The control input of joint 2.
Figure 11. Tracking performances of the robotic arm joint 1 in joint space.
Figure 12. Tracking performances of the robotic arm joint 2 in joint space.
Figure 13. Tracking performances of the robotic arm joint 1 in Cartesian space.
Figure 14. Tracking performances of the robotic arm joint 2 in Cartesian space.
Figure 15. Tracking error of joint 1 and joint 2.
Figure 16. The control input of joint 1 and joint 2.
Figure 17. The NN weights of the designed controller.
Figure 18. The adaptive term β of the designed controller.
Table 1. Performance indexes for joint 1 tracking test.

Methods | E_2a(e) (rad) | E_ma (rad) | E_2a(U) (N·m)
SMC     | 0.1065 | 2.9 × 10⁻³ | 28.6725
RISE    | 0.0893 | 5.3 × 10⁻³ | 28.1323
ARINNSE | 0.0515 | 2.2 × 10⁻³ | 28.0857
Table 2. Performance indexes for joint 2 tracking test.

Methods | E_2a(e) (rad) | E_ma (rad) | E_2a(U) (N·m)
SMC     | 0.0003 | 0.5 × 10⁻³ | 1.6721
RISE    | 0.0352 | 0.4 × 10⁻³ | 1.6079
ARINNSE | 0.0204 | 0.3 × 10⁻³ | 1.6167
Table 3. Performance indexes of human–robot interaction test.

Arm Joints | E_2a(e) (rad) | E_ma (rad) | E_2a(U) (N·m)
1 | 0.0120 | 1.84 × 10⁻⁶ | 29.1672
2 | 0.0151 | 1.2661 × 10⁻⁴ | 7.2041

Chen, S.; Jiang, L.; Bai, K.; Chen, Y.; Xu, X.; Jiang, G.; Liu, Y. Human–Robot Interaction for a Manipulator Based on a Neural Adaptive RISE Controller Using Admittance Model. Electronics 2025, 14, 4862. https://doi.org/10.3390/electronics14244862