Article

Improved Adaptive Sliding Mode Control Using Quasi-Convex Functions and Neural Network-Assisted Time-Delay Estimation for Robotic Manipulators

Jin Woong Lee, Jae Min Rho, Sun Gene Park, Hyuk Mo An, Minhyuk Kim and Seok Young Lee
1 Department of ICT Convergence Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
2 Department of Electronic Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
3 Department of Artificial Intelligence and Information Technology, Sejong University, Seoul 05006, Republic of Korea
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(14), 4252; https://doi.org/10.3390/s25144252
Submission received: 23 May 2025 / Revised: 2 July 2025 / Accepted: 3 July 2025 / Published: 8 July 2025

Abstract

This study presents an adaptive sliding mode control strategy tailored to robotic manipulators, featuring a quasi-convex function-based control gain and a time-delay estimation (TDE) scheme enhanced by neural networks. To compensate for TDE errors, the proposed method utilizes both the previous TDE error and radial basis function neural networks with a weight update law that includes damping terms to prevent divergence. Additionally, a continuous, quasi-convex gain function of the magnitude of the sliding variable is proposed to replace the traditional switching control gain. This continuous function-based gain effectively suppresses the chattering phenomenon while guaranteeing the stability of the robotic manipulator in the sense of uniform ultimate boundedness, as demonstrated through both simulation and experimental results.

1. Introduction

Robotic manipulators have been increasingly employed across various fields, including manufacturing [1], surgery [2], service industries [3], and agriculture [4], due to their high versatility. To operate effectively across diverse fields, robotic manipulators must exhibit high accuracy and reliability, which requires precise model information for their nonlinear dynamics. However, highly nonlinear properties, external disturbances, and inaccuracies in system modeling make it difficult to obtain precise model information. Consequently, researchers have explored diverse control approaches including time-delay control (TDC) [5,6,7,8] and sliding mode control (SMC) [5,9,10,11,12,13] to cope with these issues.
In real environments, obtaining the exact dynamic equations of robotic manipulators is challenging due to inaccuracies in system modeling and external disturbances. Thus, the TDC technique utilizes time-delay estimation (TDE) to approximate unpredictable disturbances and modeling inaccuracies of robotic manipulators [5,6,7,8]. The TDE uses previous accelerations and control torques for estimation because it is based on the assumption that the estimated value remains the same as at the previous sampling instant when the sampling period is small enough. However, the sampling period cannot be reduced indefinitely due to limitations in communication speeds and hardware performance, which leads to TDE errors [5]. In [8], an enhanced TDE structure was proposed to reduce the current TDE error by utilizing the previous one scaled by a tunable parameter, but this parameter increases the control complexity. Moreover, TDE errors are not completely eliminated, requiring an additional compensation strategy. To better handle TDE errors, methods combining TDC and SMC were proposed in [5,8]. Further, neural networks (NNs) have been utilized with robust control strategies [14,15,16,17].
SMC is a robust control scheme widely recognized for its simplicity and effectiveness [9,10]. To compensate for TDE errors, the SMC input utilizes an adjustable gain, which steers the sliding variable toward the sliding surface whenever the control gain exceeds the total system uncertainty. However, it is difficult to precisely determine an SMC gain satisfying this condition. Thus, adaptive sliding mode control (ASMC), which adjusts the SMC gain using an adaptive law dependent on the sliding variable, has been proposed [5,11,12,13]. Generally, the adaptive update law raises the SMC gain to drive the sliding variable toward zero [11]. However, in real environments, the sliding variable can never become exactly zero [5,12], which leads to an overestimated ASMC gain and an increased chattering phenomenon. To address this issue, adaptive laws have been proposed that reduce the ASMC gain when the sliding variable converges to a specific region and increase it when the sliding variable deviates from this region [8,12]. However, adaptive laws that switch at specific boundaries may exacerbate the chattering phenomenon due to abrupt changes in the control gain. Thus, a class K function-based gain that is convex or concave with respect to the magnitude of the sliding variable was proposed in [13]. Although the work [13] shows that the convex function-based gain outperforms the concave one in terms of control performance, the steep gradient of the convex function may result in an overestimated gain and an exacerbated chattering phenomenon.
NNs have been actively explored in control system design [14,15,16,17,18,19] due to their ability to estimate unknown parameters by utilizing input–output data. In the works [14,15], NNs estimate the nonlinear dynamics and approximate overall uncertainties with the SMC scheme. In [16,17], NNs are employed to adaptively tune SMC gains. In [18,19], NNs are integrated with TDC to estimate inaccuracies in system modeling or approximate TDE errors. In particular, [18] introduced a radial basis function neural network (RBFNN) to estimate TDE errors. Although these studies demonstrated that the RBFNN enhances the control performance of TDC, they were limited to simulations and lacked consideration of applicability to robotic manipulators in real environments.
Based on the aforementioned discussions, this paper proposes an ASMC strategy tailored for robotic manipulators, featuring a quasi-convex function-based control gain and the TDE enhanced by RBFNN. The main contributions of this paper are summarized as follows.
(1)
To compensate for TDE errors, the proposed method utilizes both the previous TDE error and the RBFNN with a weight update law that includes damping terms to prevent divergence.
(2)
A continuous gain, designed as a quasi-convex function with respect to the magnitude of the sliding variable, is proposed to replace the traditional switching adaptive law. This function remains continuous and transitions smoothly between convex and concave behavior depending on the magnitude of the sliding variable.
(3)
The stability of the proposed control method is guaranteed in the sense of uniform ultimate boundedness, and its effectiveness is validated through both simulation and experiment results.
This paper is organized as follows. Section 2 introduces the system dynamics and preliminary definitions required for control design. Section 3 presents the proposed TDE enhanced by RBFNN and the quasi-convex function-based ASMC. In Section 4, simulation results on a 2-joint robotic manipulator are provided. Section 5 validates the performance of the proposed method through experiments on a real robotic manipulator. Finally, Section 6 concludes the study.
Notations: $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{n \times m}$ denote the set of real numbers, the set of $n$-dimensional real vectors, and the set of $n \times m$ real matrices, respectively. $\mathrm{diag}\{\cdot\}$ denotes a diagonal matrix with the indicated elements. $I_n$ denotes the $n \times n$ identity matrix. $J_{n,m}$ denotes the matrix in $\mathbb{R}^{n \times m}$ whose entries are all equal to one. $\mathrm{sgn}(\cdot)$ represents the signum function. $\|x(t)\|_{\infty}$ and $\|x(t)\|$ denote the infinity norm and the Euclidean norm of the vector $x(t)$, respectively.

2. Preliminaries

Definition 1 
([20]). Given the sliding variable σ and a positive scalar δ, the real sliding surface is defined as
$$\Sigma = \{\, \sigma \in \mathbb{R}^n \mid \|\sigma\| \le \delta \,\}. \qquad (1)$$
Definition 2 
([21]). A continuous function $\beta : [0, c) \to [0, \infty)$ is considered a class $\mathcal{K}$ function if it is strictly increasing and $\beta(0) = 0$. If the domain $[0, c)$ is extended to $[0, \infty)$ and $\beta(s) \to \infty$ as $s \to \infty$, then $\beta$ belongs to class $\mathcal{K}_\infty$.
Lemma 1 
([21]). Let $\beta_1$ be a class $\mathcal{K}$ function defined on $[0, c)$, and let $\beta_2$ be a class $\mathcal{K}_\infty$ function. Denote the inverse of $\beta_i$ by $\beta_i^{-1}$, $i = 1, 2$. Then, the following statements hold:
  • $\beta_1^{-1}$ is defined on $[0, \beta_1(c))$ and belongs to class $\mathcal{K}$.
  • $\beta_2^{-1}$ is defined on $[0, \infty)$ and belongs to class $\mathcal{K}_\infty$.
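For illustration (an example added here, not taken from the original text), $\beta_1(s) = \tan(s)$ on $[0, \pi/2)$ is a class $\mathcal{K}$ function that is not class $\mathcal{K}_\infty$ because its domain is bounded, whereas $\beta_2(s) = \rho \ln(1 + s/\lambda)$ with $\rho, \lambda > 0$ belongs to class $\mathcal{K}_\infty$; its inverse $\beta_2^{-1}(r) = \lambda(e^{r/\rho} - 1)$ is defined on $[0, \infty)$ and is again class $\mathcal{K}_\infty$, consistent with Lemma 1.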
The dynamic equation of an $n$-joint robotic manipulator under external disturbances can be described by the following expression.
$$H(\theta(t))\ddot{\theta}(t) + C(\theta(t), \dot{\theta}(t))\dot{\theta}(t) + G(\theta(t)) + F(\dot{\theta}(t)) = \tau(t) + \tau_d(t). \qquad (2)$$
Here, $\theta(t)$, $\dot{\theta}(t)$, and $\ddot{\theta}(t)$ are vectors in $\mathbb{R}^n$ representing the positions, velocities, and accelerations of the joints, respectively. $H(\theta(t))$ is the inertia matrix and $C(\theta(t), \dot{\theta}(t))$ is the Coriolis matrix, both in $\mathbb{R}^{n \times n}$. The gravity force vector $G(\theta(t)) \in \mathbb{R}^n$ and the friction force vector $F(\dot{\theta}(t)) \in \mathbb{R}^n$ account for gravitational and frictional effects acting on the robot joints. The control torque $\tau(t) \in \mathbb{R}^n$ and the unpredictable disturbance $\tau_d(t) \in \mathbb{R}^n$ influence the system dynamics. By multiplying both sides of Equation (2) by $H^{-1}(\theta(t))$ and reorganizing it with respect to the acceleration $\ddot{\theta}(t)$, Equation (2) can be expressed as
$$\ddot{\theta}(t) = \bar{H}^{-1}\tau(t) + N(t). \qquad (3)$$
Here, $\bar{H} = \mathrm{diag}\{\bar{h}_1, \bar{h}_2, \ldots, \bar{h}_n\} \in \mathbb{R}^{n \times n}$ represents a gain matrix and $N(t)$ is given below.
$$N(t) = -\bar{H}^{-1}\big[C(\theta(t), \dot{\theta}(t))\dot{\theta}(t) + G(\theta(t)) + F(\dot{\theta}(t)) - \tau_d(t)\big] - \bar{H}^{-1}\big[H(\theta(t)) - \bar{H}\big]\ddot{\theta}(t). \qquad (4)$$
It is difficult to precisely derive $N(t)$ in real environments, as it exhibits nonlinear and time-varying characteristics. If the sampling period $L$ is sufficiently small, $N(t)$ can be approximated using the TDE as follows.
$$N(t) \approx \hat{N}(t) = N(t - L) = \ddot{\theta}(t - L) - \bar{H}^{-1}\tau(t - L). \qquad (5)$$
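As a minimal illustration of the delayed estimate (5) (a sketch added here, not code from the paper; the variable names are hypothetical), the following Python function forms $\hat{N}(t)$ from the joint acceleration and control torque stored one sampling period earlier.

```python
import numpy as np

def tde_estimate(theta_ddot_prev, tau_prev, H_bar):
    """Time-delay estimate N_hat(t) = theta_ddot(t-L) - H_bar^{-1} tau(t-L), Equation (5).

    theta_ddot_prev : joint accelerations measured one sampling period L ago, shape (n,)
    tau_prev        : control torques applied one sampling period L ago, shape (n,)
    H_bar           : constant diagonal gain matrix, shape (n, n)
    """
    return theta_ddot_prev - np.linalg.solve(H_bar, tau_prev)
```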
Using the TDE method (5), the control input $\tau_1^{TDC}(t)$ aiming to track the desired position $\theta_d(t)$ is expressed by
$$\tau_1^{TDC}(t) = \bar{H}\big(\ddot{\theta}_d(t) + (\ell_1 + \ell_2)\dot{e}(t) + \ell_1\ell_2 e(t)\big) - \bar{H}N(t - L), \qquad (6)$$
where $\ell_1$, $\ell_2$ are positive scalars and $e(t) = \theta_d(t) - \theta(t)$ is the tracking error. Substituting (5) and (6) into (3), the following error dynamic equation can be obtained.
$$\ddot{e}(t) + (\ell_1 + \ell_2)\dot{e}(t) + \ell_1\ell_2 e(t) + \phi(t) = 0, \qquad (7)$$
where $\phi(t) \triangleq N(t) - N(t - L)$ is the TDE error. Since the estimation error depends on $L$, the TDE error $\phi(t)$ remains small and bounded when $L$ is sufficiently small, i.e.,
$$\|\phi(t)\| \le \phi^*, \qquad (8)$$
where $\phi^*$ is a positive constant [8]. In order to mitigate the current TDE error $\phi(t)$, the work [8] introduces an improved TDC control input $\tau_2^{TDC}(t)$ defined as
$$\tau_2^{TDC}(t) = \bar{H}\big(\ddot{\theta}_d(t) + (\ell_1 + \ell_2)\dot{e}(t) + \ell_1\ell_2 e(t)\big) - \bar{H}\big(N(t - L) + \alpha\phi(t - L)\big), \qquad (9)$$
where $\alpha$ is a tunable scalar. Substituting the TDC control torque $\tau_2^{TDC}(t)$ into Equation (3), the following error dynamic equation is obtained.
$$\ddot{e}(t) + (\ell_1 + \ell_2)\dot{e}(t) + \ell_1\ell_2 e(t) + \phi(t) - \alpha\phi(t - L) = 0, \qquad (10)$$
where $\phi(t - L)$ denotes the previous TDE error. In Equation (10), the current TDE error $\phi(t)$ can be reduced by the previous TDE error $\phi(t - L)$ if the tunable parameter $\alpha$ is properly selected. Then, to compensate for the following novel TDE error,
$$\breve{\phi}(t) = \phi(t) - \alpha\phi(t - L), \qquad (11)$$
the ASMC torque $\tau_1^{ASMC}(t)$ is given by the equation below.
$$\tau_1^{ASMC}(t) = \bar{H}\big(\ddot{\theta}_d(t) + \ell_1\dot{e}(t) + \ell_2\sigma(t)\big) - \bar{H}\big(N(t - L) + \alpha\phi(t - L)\big) + \bar{H}K(\sigma(t))\,\mathrm{sgn}(\sigma(t)). \qquad (12)$$
In this expression, $\sigma(t) = \dot{e}(t) + \ell_1 e(t)$ is the sliding variable, and $K(\sigma(t)) \in \mathbb{R}^{n \times n}$ denotes an ASMC gain matrix associated with the sliding variable.

3. Proposed ASMC and TDE Enhanced by NNs

A quasi-convex function-based ASMC and a TDE enhanced by NNs are proposed in this paper, as illustrated in Figure 1. We have the following ASMC input.
$$\tau_2^{ASMC}(t) = \bar{H}\big(\ddot{\theta}_d(t) + \ell_1\dot{e}(t) + \ell_2\sigma(t)\big) - \bar{H}\big(N(t - L) + \Xi(t)\phi(t - L)\big) + \bar{H}K(\sigma(t))\,\mathrm{sgn}(\sigma(t)). \qquad (13)$$
When the ASMC (13) is utilized in place of the existing ASMC (12), the TDE error (11) transforms into the following TDE error:
$$\tilde{\phi}(t) = \phi(t) - \Xi(t)\phi(t - L). \qquad (14)$$
In this formulation, $\Xi(t) = \mathrm{diag}\{\xi_1(t), \xi_2(t), \ldots, \xi_n(t)\}$ corresponds to the output of the proposed RBFNN, and $K(\sigma(t)) = \mathrm{diag}\{k_1(\sigma_1(t)), k_2(\sigma_2(t)), \ldots, k_n(\sigma_n(t))\}$. The proposed RBFNN architecture includes an input layer, a hidden layer with nonlinear activation functions, and an output layer. Its structure can be described as follows.
$$\xi_i(t) = \sum_{j=1}^{\nu} \omega_{i,j}(t)\,\psi_j(U_i(t)) + b. \qquad (15)$$
Here, $\nu$ denotes the number of hidden units. $\omega_{i,j}(t)$, $U_i(t)$, and $b$ represent the weights, the input vector, and the bias, respectively, for $i = 1, \ldots, n$ and $j = 1, \ldots, \nu$. The activation function $\psi_j(U_i(t))$ is constructed using a Gaussian RBF, formulated as
$$\psi_j(U_i(t)) = \exp\!\left(-\frac{d(U_i(t), M_j)^2}{2\eta^2}\right), \qquad (16)$$
where $U_i(t) = [e_i(t)\ \ \theta_{d,i}(t)\ \ \dot{\theta}_{d,i}(t)]^T$ is the input vector containing the tracking error, the desired position, and the desired velocity of the $i$-th joint. $d(U_i(t), M_j)$ represents the Euclidean distance between the input vector $U_i(t)$ and the center vector $M_j$, given by $\|U_i(t) - M_j\|$. Here, $M_j = [m_{j,1}\ \ m_{j,2}\ \ m_{j,3}]^T$ is the center vector of the Gaussian RBF, and $\eta$ denotes its width parameter. The parameter $\eta$ determines the width of the activation range of the basis function; a smaller $\eta$ results in a steeper variation of the function value near the center. The weight $\omega_{i,j}(t)$ is updated using the following update law.
$$\dot{\omega}_{i,j}(t) = \zeta_i|\sigma_i(t)|\,\psi_j(U_i(t)) - \gamma_i\omega_{i,j}(t), \qquad (17)$$
where ζ i and γ i are positive scalars.
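As an illustration of (15)–(17) (a minimal sketch under the assumptions stated here, not the authors' implementation; the function names and the explicit Euler step used to integrate (17) over one sampling period are choices made for this example), the RBFNN output for one joint and the damped weight update can be written in Python as follows.

```python
import numpy as np

def rbfnn_output(U_i, W_i, M, eta, b):
    """RBFNN output xi_i(t) for joint i, Equations (15)-(16).

    U_i : input vector [e_i, theta_d_i, theta_dot_d_i], shape (3,)
    W_i : current weights omega_{i,j}, shape (nu,)
    M   : Gaussian centers M_j, shape (nu, 3)
    eta : width of the Gaussian basis functions
    b   : output bias
    Returns the scalar output xi_i(t) and the activation vector psi.
    """
    d2 = np.sum((M - U_i) ** 2, axis=1)       # squared distances ||U_i - M_j||^2
    psi = np.exp(-d2 / (2.0 * eta ** 2))      # Gaussian activations psi_j(U_i)
    return W_i @ psi + b, psi

def update_weights(W_i, psi, sigma_i, zeta_i, gamma_i, L):
    """One Euler step of the damped update law (17):
    omega_dot_{i,j} = zeta_i * |sigma_i| * psi_j - gamma_i * omega_{i,j}."""
    W_dot = zeta_i * abs(sigma_i) * psi - gamma_i * W_i
    return W_i + L * W_dot                    # L is the sampling period
```

In this sketch, the damping term $-\gamma_i\omega_{i,j}(t)$ pulls each weight back toward zero whenever the sliding variable is small, which is what keeps the weights from drifting without bound.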
Remark 1.
Compared to the TDE in [8], which utilizes the fixed parameter $\alpha$ in (10), the TDE enhanced by NNs in (13) utilizes the tracking errors, desired positions, and desired velocities as input data, enabling the estimation of $\Xi(t)$ for the current pose of the robotic manipulator. Additionally, compared to those in [18,22], the proposed RBFNN incorporates a damping term in its weight update law (17) to prevent weight divergence. The effectiveness of the proposed TDE is shown through both simulation and experiment results.
The proposed function-based gain is defined as follows.
$$k_i(\sigma_i(t)) = \rho\ln\!\left(1 + \frac{|\sigma_i(t)|^2}{\lambda^2}\right), \qquad (18)$$
where $\rho$ and $\lambda$ are positive tuning parameters. The parameter $\rho$ determines the overall scaling of the control gain. The parameter $\lambda$ serves as a normalization factor that determines the degree to which the gain responds to the sliding variable. A smaller $\lambda$ yields a steeper gain increase for small errors, while a larger $\lambda$ results in a more gradual change. The gain function $k_i(\sigma_i(t))$ belongs to class $\mathcal{K}_\infty$ in Definition 2 and satisfies the following properties:
  • $k_i(\sigma_i(t))$ is continuous on $[0, \infty)$.
  • $k_i(\sigma_i(t))$ is at least $C^0$ with respect to $|\sigma_i(t)|$ on $(0, \infty)$.
  • The inverse function of $k_i(\sigma_i(t))$ exists.
  • As $|\sigma_i(t)|$ approaches $\infty$, $k_i(\sigma_i(t))$ approaches $\infty$.
Remark 2.
The continuous gain function (18), which is quasi-convex with respect to the magnitude of the sliding variable, is proposed to replace the traditional switching control gain. The traditional switching-based adaptive law suffers from the drawback of inducing the chattering phenomenon due to the discontinuous variation of the control gain. In contrast, the proposed function is a continuous function that belongs to class $\mathcal{K}_\infty$, allowing the control gain to be adjusted smoothly and continuously according to the magnitude of the sliding variable, thereby effectively mitigating this issue. Also, this function transitions smoothly between convex and concave characteristics depending on the magnitude of the sliding variable. As illustrated in Figure 2, a convex function has a steep gradient as the sliding variable increases, whereas a concave one has a steep gradient near the origin but becomes flatter as the sliding variable increases. Therefore, convex and concave function-based gains have drawbacks at large sliding variable values and near the origin, respectively. In contrast, the proposed quasi-convex function-based gain is convex near the origin and concave as the sliding variable increases. The effectiveness of the proposed function is verified via the tracking performance and the chattering phenomenon observed in the control torque.
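To make the structure of (13) and (18) concrete, the following Python sketch assembles the proposed control torque (a sketch under the reconstructed gain form $k_i(\sigma_i) = \rho\ln(1 + |\sigma_i|^2/\lambda^2)$, with $\ell_1$, $\ell_2$ treated as scalars; it is an illustration, not the authors' code).

```python
import numpy as np

def quasi_convex_gain(sigma, rho, lam):
    """Gain k_i = rho * ln(1 + (|sigma_i|/lam)^2), Equation (18):
    convex for |sigma_i| < lam, concave beyond, and zero at the origin."""
    return rho * np.log(1.0 + (np.abs(sigma) / lam) ** 2)

def proposed_asmc_torque(theta_dd_d, e, e_dot, N_hat, Xi, phi_prev,
                         H_bar, l1, l2, rho, lam):
    """Proposed control torque (13):
    tau = H_bar (theta_dd_d + l1*e_dot + l2*sigma)
          - H_bar (N_hat + Xi phi_prev) + H_bar K(sigma) sgn(sigma)."""
    sigma = e_dot + l1 * e                    # sliding variable, shape (n,)
    K = quasi_convex_gain(sigma, rho, lam)    # element-wise gains k_i(|sigma_i|)
    return H_bar @ (theta_dd_d + l1 * e_dot + l2 * sigma
                    - (N_hat + Xi @ phi_prev)
                    + K * np.sign(sigma))
```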
To investigate the stability of the system (2) under the ASMC (13), the following Lyapunov function is introduced.
$$L(\sigma(t)) = \frac{1}{2}\sigma^T(t)\sigma(t). \qquad (19)$$
Using the Lyapunov function (19), the main result can be formulated as follows.
Theorem 1.
For positive constants $\rho$, $\lambda$, and any initial sliding variable $\sigma(0)$, if the proposed control torque (13) is applied to the system (2), then the sliding variable $\sigma(t) \in \mathbb{R}^n$ is uniformly ultimately bounded in the sense that
$$\|\sigma(t)\|_\infty \le \delta_{\max}, \qquad (20)$$
where
$$\delta_{\max} \triangleq \max\{\delta_i,\ i = 1, 2, \ldots, n\}, \qquad (21)$$
$$\delta_i = \lambda\sqrt{\exp\!\left(\frac{\bar{\phi}_i}{\rho}\right) - 1}. \qquad (22)$$
Here, $\tilde{\phi}(t) = [\tilde{\phi}_1(t)\ \ \tilde{\phi}_2(t)\ \ \cdots\ \ \tilde{\phi}_n(t)]^T$ denotes the TDE error in (14) and $\bar{\phi}_i$ denotes an upper bound of $|\tilde{\phi}_i(t)|$.
Proof. 
The derivative of the Lyapunov function (19) with respect to time is computed as follows.
$$\begin{aligned}
\frac{d}{dt}L(\sigma(t)) &= \sigma^T(t)\dot{\sigma}(t) = \sigma^T(t)\big(\ddot{\theta}_d(t) + \ell_1\dot{e}(t) - \bar{H}^{-1}\tau_2^{ASMC}(t) - N(t)\big) \\
&= \sum_{i=1}^{n}\big\{\big(-\ell_2\sigma_i(t) - \phi_i(t) + \xi_i(t)\phi_i(t - L) - k_i(\sigma_i(t))\,\mathrm{sgn}(\sigma_i(t))\big)\sigma_i(t)\big\} \\
&= \sum_{i=1}^{n}\big\{-\ell_2\sigma_i^2(t) - \tilde{\phi}_i(t)\sigma_i(t) - k_i(\sigma_i(t))|\sigma_i(t)|\big\} \\
&\le \sum_{i=1}^{n}\big\{-\ell_2\sigma_i^2(t) - \big(k_i(\sigma_i(t)) - \bar{\phi}_i\big)|\sigma_i(t)|\big\}. \qquad (23)
\end{aligned}$$
The time derivative of the Lyapunov function (19) is guaranteed to be negative if the gain function satisfies $k_i(\sigma_i(t)) > \bar{\phi}_i$. Since $k_i(\sigma_i(t))$ is strictly increasing and belongs to class $\mathcal{K}_\infty$, its inverse $k_i^{-1}(\cdot)$ exists and belongs to the same class, as established in Lemma 1. Accordingly, one can define the threshold $\delta_i$ such that $k_i(\delta_i) = \bar{\phi}_i$, which yields $\delta_i = k_i^{-1}(\bar{\phi}_i)$. Thus, if $|\sigma_i(t)| > \delta_i$ for each joint, the inequality $k_i(\sigma_i(t)) > \bar{\phi}_i$ holds, ensuring that $\frac{d}{dt}L(\sigma(t)) < 0$. This result guarantees that $\sigma_i(t)$ converges to within the bound $\delta_i$. □
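As a quick numerical check of the bound (22) (an illustrative sketch; the TDE-error bound $\bar{\phi}_i = 2$ below is an assumed value, not one reported in the paper, and the gain follows the form of (18) as reconstructed above), the threshold $\delta_i = k_i^{-1}(\bar{\phi}_i)$ indeed satisfies $k_i(\delta_i) = \bar{\phi}_i$:

```python
import numpy as np

rho, lam = 10.0, 0.0213                 # gain parameters used in the simulation section
phi_bar = 2.0                           # assumed TDE-error bound (illustrative only)

# Ultimate bound delta_i = lam * sqrt(exp(phi_bar/rho) - 1), obtained from k_i(delta_i) = phi_bar
delta = lam * np.sqrt(np.exp(phi_bar / rho) - 1.0)
k_at_delta = rho * np.log(1.0 + (delta / lam) ** 2)

print(delta)                            # about 0.0100 for the assumed phi_bar
print(np.isclose(k_at_delta, phi_bar))  # True: the gain equals the bound at |sigma_i| = delta_i
```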
Remark 3.
In the works [5,8], the ASMC gain is updated according to the infinity norm of the sliding variable. Since such an adaptive law is mainly sensitive to the largest-magnitude element of the sliding variable or of the TDE error, the gain may be conservatively selected for all joints other than the one associated with that largest-magnitude element. In contrast, the proposed quasi-convex function-based gain is tailored to individual joint behavior, enabling a less conservative gain selection and an individual bound (22) for each element of the sliding variable.

4. Simulation

4.1. Simulation Setup

The simulation is performed based on the dynamic model (2) of a 2-joint robotic manipulator, given by
$$H(\theta(t)) = \begin{bmatrix} l_2^2 h_2 + 2 l_1 l_2 h_2 \cos(\theta_2(t)) + l_1^2 (h_1 + h_2) & l_2^2 h_2 + l_1 l_2 h_2 \cos(\theta_2(t)) \\ l_2^2 h_2 + l_1 l_2 h_2 \cos(\theta_2(t)) & l_2^2 h_2 \end{bmatrix},$$
$$C(\theta(t), \dot{\theta}(t))\dot{\theta}(t) = \begin{bmatrix} -h_2 l_1 l_2 \sin(\theta_2(t))\,\{\dot{\theta}_2^2(t) + 2\dot{\theta}_1(t)\dot{\theta}_2(t)\} \\ h_2 l_1 l_2 \sin(\theta_2(t))\,\dot{\theta}_1^2(t) \end{bmatrix},$$
$$G(\theta(t)) = \begin{bmatrix} h_1 l_1 g \cos(\theta_1(t)) + h_2 g\,(l_1 \cos(\theta_1(t)) + l_2 \cos(\theta_1(t) + \theta_2(t))) \\ h_2 l_2 g \cos(\theta_1(t) + \theta_2(t)) \end{bmatrix},$$
$$F(\dot{\theta}(t)) = \begin{bmatrix} f_{c1}\,\mathrm{sgn}(\dot{\theta}_1(t)) + f_{v1}\dot{\theta}_1(t) \\ f_{c2}\,\mathrm{sgn}(\dot{\theta}_2(t)) + f_{v2}\dot{\theta}_2(t) \end{bmatrix}, \qquad (24)$$
where $\theta(t) = [\theta_1(t)\ \ \theta_2(t)]^T$ and $\theta_i(t)$ represents the position of the $i$-th joint. The simulation model and parameters are adopted from [8] for consistency and comparability. In the system matrices (24), the scalars are defined as $h_1 = 9$ kg, $h_2 = 6$ kg, $l_1 = 0.4$ m, $l_2 = 0.2$ m, and $g = 9.81$ m/s². The friction coefficients are $f_{c1} = f_{c2} = 10$ N·m and $f_{v1} = f_{v2} = 10$ N·m·s. The parameters used in the control torque are configured as $L = 0.001$ s, $\ell_1 = 30$, $\ell_2 = 5$, $\bar{H} = \mathrm{diag}\{0.08, 0.04\}$, and $\alpha = 0.3$. The unpredictable disturbance is described by $\tau_d(t) = 10\cos(2\pi t)$, and the desired position is given by $\theta_d(t) = [4\sin(3t)\ \ 3\sin(2t)]^T$. The adjustable parameters of the RBFNN are set as $M_1 = 10 \cdot J_{1,3}$, $M_2 = 8 \cdot J_{1,3}$, $M_3 = 6 \cdot J_{1,3}$, $M_4 = 4 \cdot J_{1,3}$, $M_5 = 2 \cdot J_{1,3}$, $M_6 = 0 \cdot J_{1,3}$, $M_7 = -M_5$, $M_8 = -M_4$, $M_9 = -M_3$, $M_{10} = -M_2$, $M_{11} = -M_1$, $\eta = 5$, $\zeta_1 = \zeta_2 = 1$, $\gamma_1 = 0.03$, $\gamma_2 = 0.04$, and $b = 0.3$. The parameters of the quasi-convex function, $\rho$ and $\lambda$, are set to 10 and 0.0213, respectively. For an effective comparison of control performance, this paper reports the results of the ASMC (12) using the convex function-based gain in [13], the proposed ASMC (13), and its special case with $\Xi(t) = \alpha I_2$. The parameters of the convex function-based gain, $\rho_2$ and $\lambda_2$, are set to 5 and 0.05, respectively.
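For reference, the simulated plant (24) with the parameters listed above can be coded as follows (a sketch of the model as reconstructed here, assuming the standard two-link form; it is not the authors' simulation code).

```python
import numpy as np

h1, h2 = 9.0, 6.0              # link masses [kg]
l1, l2 = 0.4, 0.2              # link lengths [m]
g = 9.81                       # gravitational acceleration [m/s^2]
fc = np.array([10.0, 10.0])    # Coulomb friction coefficients [N*m]
fv = np.array([10.0, 10.0])    # viscous friction coefficients [N*m*s]

def dynamics_terms(theta, theta_dot):
    """Inertia H, Coriolis/centrifugal vector C(theta, theta_dot)*theta_dot,
    gravity G, and friction F of the 2-joint model (24)."""
    c2, s2 = np.cos(theta[1]), np.sin(theta[1])
    H = np.array([
        [l2**2 * h2 + 2 * l1 * l2 * h2 * c2 + l1**2 * (h1 + h2), l2**2 * h2 + l1 * l2 * h2 * c2],
        [l2**2 * h2 + l1 * l2 * h2 * c2,                          l2**2 * h2],
    ])
    C_qd = np.array([
        -h2 * l1 * l2 * s2 * (theta_dot[1]**2 + 2 * theta_dot[0] * theta_dot[1]),
         h2 * l1 * l2 * s2 * theta_dot[0]**2,
    ])
    G = np.array([
        h1 * l1 * g * np.cos(theta[0]) + h2 * g * (l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])),
        h2 * l2 * g * np.cos(theta[0] + theta[1]),
    ])
    F = fc * np.sign(theta_dot) + fv * theta_dot
    return H, C_qd, G, F
```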

4.2. Simulation Results

Figure 3 presents the desired position trajectories and the associated RBFNN outputs, which are the elements of the estimated matrix $\Xi(t)$. From Figure 3, it can be seen that the weight update law in (17) effectively prevents the RBFNN output from diverging. Figure 4 and Figure 5 illustrate that employing the estimated matrix $\Xi(t)$ significantly reduces tracking errors compared to the case using the constant parameter $\alpha I_2$ in the control input (13). In addition, Figure 4 and Figure 5 compare the tracking performance of the proposed ASMC (13) with $\Xi(t) = \alpha I_2$ against the ASMC (12) using the convex function-based gain in [13]. Notably, Figure 6 reveals that the convex function-based ASMC induces more frequent oscillations during the reaching phase. Additionally, the rapid convergence of the sliding variable is not only essential for improving tracking accuracy and disturbance rejection, but also plays a critical role in ensuring reliable and real-time control in practical systems, as emphasized in [23,24]. The reduced chattering phenomenon in the proposed control torque is clearly observed in Figure 7 and Figure 8.

5. Experiment

5.1. Experiment Setup

Figure 9 illustrates the 7-joint robotic manipulator, Franka Research 3, used in the experiment. The proposed controller was implemented using MATLAB/Simulink R2023b and executed on an Ubuntu 20.04 system. Real-time communication was established between MATLAB and the Franka Research 3 using the Franka Control Interface, enabling direct torque commands to be sent at each control cycle with a sampling rate of 1 kHz. Joint positions and velocities were measured using the internal encoders, while joint torques were obtained from the built-in torque sensors. The attached payload is 0.5 kg, including the 0.338 kg weight of a water bottle; this payload generates highly irregular disturbances due to the movement of the water. Three control methods, referred to as the ASMC (12) using the convex function-based gain in [13], the proposed ASMC (13), and its special case with $\Xi(t) = \alpha I_7$, are compared in the experiment. The initial pose of the Franka Research 3 is $\theta_0 = [0\ \ -0.7854\ \ 0\ \ -2.3562\ \ 0\ \ 1.5708\ \ 0.7854]^T$ and the desired position is set as follows.
$$\theta_d(t) = \left[\frac{0.7}{\pi}\cos(\pi t)\ \ \frac{1.5}{\pi}\cos\!\left(\frac{2}{3}\pi t\right)\ \ \frac{1.5}{\pi}\cos\!\left(\frac{2}{3}\pi t\right)\ \ \frac{0.7}{\pi}\cos(\pi t)\ \ 0\ \ 0\ \ 0\right]^T + \theta_0. \qquad (25)$$
The center $M_j$ and the width $\eta$ of the RBFNN are chosen to be the same as those used in the simulation. The control parameters are set as follows: $\bar{H} = 0.01 I_7$, $\ell_1 = \ell_2 = 5 I_7$, $L = 0.001$ s, $\zeta_1 = \zeta_2 = \cdots = \zeta_7 = 1$, $\gamma_1 = \gamma_2 = \gamma_3 = \gamma_4 = 0.4$, $\gamma_5 = \gamma_6 = \gamma_7 = 0.2$, and $\alpha = b = 0.4$. The parameters of the quasi-convex function-based gain (18), $\rho$ and $\lambda$, are set to 10 and 0.255, respectively. The parameters of the convex function-based gain in [13], $\rho_2$ and $\lambda_2$, are set to 3 and 0.5, respectively.
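A small Python sketch of the initial pose and the desired trajectory (25) follows (the amplitudes $0.7/\pi$ and $1.5/\pi$ are taken from the reconstruction of (25) above and are therefore an assumption, not a value confirmed by the authors).

```python
import numpy as np

theta_0 = np.array([0.0, -0.7854, 0.0, -2.3562, 0.0, 1.5708, 0.7854])  # initial pose [rad]

def theta_d(t):
    """Desired joint trajectory (25), expressed as an offset from the initial pose theta_0."""
    a, b = 0.7 / np.pi, 1.5 / np.pi
    offset = np.array([a * np.cos(np.pi * t),
                       b * np.cos(2.0 * np.pi * t / 3.0),
                       b * np.cos(2.0 * np.pi * t / 3.0),
                       a * np.cos(np.pi * t),
                       0.0, 0.0, 0.0])
    return theta_0 + offset
```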

5.2. Experiment Results

Figure 10 presents the desired position trajectories along with the corresponding RBFNN outputs, which constitute the estimated matrix $\Xi(t)$. It is observed that the weight update law in (17) effectively prevents the RBFNN output from diverging, even in real environments. Figure 11 and Figure 12 demonstrate that using the estimated $\Xi(t)$ leads to reduced tracking errors compared to the case using the fixed parameter $\alpha I_7$ in the control input (13). Furthermore, Figure 11 and Figure 12 compare the tracking performance of the proposed ASMC with $\Xi(t) = \alpha I_7$ against the ASMC (12) using the convex function-based gain in [13], showing that the proposed function contributes to improved tracking performance. Notably, Figure 12 and Figure 13 reveal improved tracking performance with a reduced chattering phenomenon in the proposed control torque (13).
Remark 4.
In addition to the simulation, the effectiveness of the proposed control method is verified through an experiment using a real 7-joint robotic manipulator, Franka Research 3. The proposed control method is tested with highly irregular disturbances caused by a water-filled payload during fast motion. The results demonstrate that the proposed control method is not only theoretically sound but also practical and robust for robotic manipulator applications in real environments.

6. Conclusions

This paper proposed an ASMC strategy tailored to robotic manipulators, featuring a quasi-convex function-based control gain and a TDE enhanced by NNs. To compensate for TDE errors, the proposed method utilized both the previous TDE error and the RBFNN with a weight update law that includes a damping term to prevent divergence. Additionally, a continuous, quasi-convex gain function was proposed to replace the traditional switching control gain. This function remained continuous and transitioned smoothly between convex and concave characteristics depending on the magnitude of the sliding variable. As a result, the proposed gain effectively suppressed the chattering phenomenon caused by abrupt changes in the SMC gain and mitigated the overestimation problem associated with the convex function. The stability of the proposed control method was guaranteed in the sense of uniform ultimate boundedness, and its effectiveness was validated through both simulation and experimental results. In future work, the proposed ASMC method will be extended to more complex robot systems.

Author Contributions

Conceptualization, J.W.L.; Funding acquisition, S.Y.L.; Investigation, J.W.L., J.M.R., S.G.P. and H.M.A.; Project administration, S.Y.L.; Supervision, M.K. and S.Y.L.; Validation, H.M.A.; Visualization, J.M.R. and S.G.P.; Writing—original draft, J.W.L. and S.Y.L.; Writing—review and editing, M.K. and S.Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Soonchunhyang University Research Fund and by the faculty research fund of Sejong University in 2025.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, M.; Yu, L.; Wong, C.; Mineo, C.; Yang, E.; Bomphray, I.; Huang, R. A cooperative mobile robot and manipulator system (Co-MRMS) for transport and lay-up of fibre plies in modern composite material manufacture. Int. J. Adv. Manuf. Technol. 2022, 119, 1249–1265.
  2. Li, C.; Yan, Y.; Xiao, X.; Gu, X.; Gao, H.; Duan, X.; Zuo, X.; Li, Y.; Ren, H. A Miniature Manipulator with Variable Stiffness Towards Minimally Invasive Transluminal Endoscopic Surgery. IEEE Robot. Autom. Lett. 2021, 6, 5541–5548.
  3. Ghodsian, N.; Benfriha, K.; Olabi, A.; Gopinath, V.; Arnou, A. Mobile Manipulators in Industry 4.0: A Review of Developments for Industrial Applications. Sensors 2023, 23, 8026.
  4. Mandil, W.; Rajendran, V.; Nazari, K.; Ghalamzan-Esfahani, A. Tactile-Sensing Technologies: Trends, Challenges and Outlook in Agri-Food Manipulation. Sensors 2023, 23, 7362.
  5. Baek, J.; Jin, M.; Han, S. A New Adaptive Sliding-Mode Control Scheme for Application to Robot Manipulators. IEEE Trans. Ind. Electron. 2016, 63, 3628–3637.
  6. Jin, M.; Kang, S.H.; Chang, P.H.; Lee, J. Robust Control of Robot Manipulators Using Inclusive and Enhanced Time Delay Control. IEEE/ASME Trans. Mechatronics 2017, 22, 2141–2152.
  7. Lee, J.; Chang, P.H.; Seo, K.H.; Jin, M. Stable Gain Adaptation for Time-Delay Control of Robot Manipulators. IFAC-PapersOnLine 2019, 52, 217–222.
  8. Park, J.; Kwon, W.; Park, P. An Improved Adaptive Sliding Mode Control Based on Time-Delay Control for Robot Manipulators. IEEE Trans. Ind. Electron. 2023, 70, 10363–10373.
  9. Young, K.; Utkin, V.; Ozguner, U. A control engineer’s guide to sliding mode control. IEEE Trans. Control Syst. Technol. 1999, 7, 328–342.
  10. Utkin, V.; Guldner, J.; Shi, J. Sliding Mode Control in Electro-Mechanical Systems; CRC Press: Boca Raton, FL, USA, 2017.
  11. Utkin, V.I.; Poznyak, A.S. Adaptive sliding mode control. In Advances in Sliding Mode Control: Concept, Theory and Implementation; Springer: Berlin/Heidelberg, Germany, 2013; pp. 21–53.
  12. Obeid, H.; Fridman, L.M.; Laghrouche, S.; Harmouche, M. Barrier Function-Based Adaptive Sliding Mode Control. Automatica 2018, 93, 540–544.
  13. Song, J.; Zuo, Z.; Basin, M. New Class K Function-Based Adaptive Sliding Mode Control. IEEE Trans. Autom. Control 2023, 68, 7840–7847.
  14. Feng, H.; Song, Q.; Ma, S.; Ma, W.; Yin, C.; Cao, D.; Yu, H. A new adaptive sliding mode controller based on the RBF neural network for an electro-hydraulic servo system. ISA Trans. 2022, 129, 472–484.
  15. Fei, J.; Wang, Z.; Pan, Q. Self-Constructing Fuzzy Neural Fractional-Order Sliding Mode Control of Active Power Filter. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 10600–10611.
  16. Fei, J.; Ding, H. Adaptive sliding mode control of dynamic system using RBF neural network. Nonlinear Dyn. 2012, 70, 1563–1573.
  17. Hu, J.; Zhang, D.; Wu, Z.G.; Li, H. Neural network-based adaptive second-order sliding mode control for uncertain manipulator systems with input saturation. ISA Trans. 2023, 136, 126–138.
  18. Zhang, X.; Wang, H.; Tian, Y.; Peyrodie, L.; Wang, X. Model-free based neural network control with time-delay estimation for lower extremity exoskeleton. Neurocomputing 2018, 272, 178–188.
  19. Han, S.; Wang, H.; Tian, Y.; Christov, N. Time-delay estimation based computed torque control with robust adaptive RBF neural network compensator for a rehabilitation exoskeleton. ISA Trans. 2020, 97, 171–181.
  20. Utkin, V.I. Sliding Modes in Control and Optimization; Communications and Control Engineering Series; Springer: Berlin/Heidelberg, Germany, 1992.
  21. Khalil, H.K.; Grizzle, J.W. Nonlinear Systems; Prentice Hall: Upper Saddle River, NJ, USA, 2002; Volume 3.
  22. Su, J.; Wang, L.; Liu, C.; Qiao, H. Robotic Inserting a Moving Object Using Visual-Based Control with Time-Delay Compensator. IEEE Trans. Ind. Inform. 2024, 20, 1842–1852.
  23. Xu, H.; Yu, D.; Wang, Z.; Cheong, K.H.; Chen, C.L.P. Nonsingular Predefined Time Adaptive Dynamic Surface Control for Quantized Nonlinear Systems. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 5567–5579.
  24. Yang, Y.; Sui, S.; Liu, T.; Philip Chen, C.L. Adaptive Predefined Time Control for Stochastic Switched Nonlinear Systems with Full-State Error Constraints and Input Quantization. IEEE Trans. Cybern. 2025, 55, 2261–2272.
Figure 1. Block diagram illustrating the structure of the proposed methods.
Figure 2. Comparison of class K functions.
Figure 3. Desired position trajectories and associated RBFNN outputs. (a) Desired positions. (b) RBFNN outputs.
Figure 4. Tracking errors and their enlarged views in the simulation compared with [13] (Song et al., 2023). (a) $e_1(t)$. (b) Enlarged view of $e_1(t)$. (c) $e_2(t)$. (d) Enlarged view of $e_2(t)$.
Figure 5. Sliding variables and their enlarged views in the simulation compared with [13] (Song et al., 2023). (a) $\sigma_1(t)$. (b) Enlarged view of $\sigma_1(t)$. (c) $\sigma_2(t)$. (d) Enlarged view of $\sigma_2(t)$.
Figure 6. Sliding variables during the reaching phase in the simulation compared with [13] (Song et al., 2023). (a) $\sigma_1(t)$. (b) $\sigma_2(t)$.
Figure 7. Control torques in the simulation compared with [13] (Song et al., 2023). (a) Control torque of joint 1. (b) Control torque of joint 2.
Figure 8. Control torques during the reaching phase in the simulation compared with [13] (Song et al., 2023). (a) Control torque of joint 1. (b) Control torque of joint 2.
Figure 9. Franka Research 3 and control environments.
Figure 10. Desired position trajectories and associated RBFNN outputs. (a) Desired position trajectories. (b) RBFNN outputs.
Figure 11. Tracking errors in the experiment compared with [13] (Song et al., 2023). (a) $e_1(t)$. (b) $e_2(t)$. (c) $e_3(t)$. (d) $e_4(t)$.
Figure 12. Sliding variables in the experiment compared with [13] (Song et al., 2023). (a) $\sigma_1(t)$. (b) $\sigma_2(t)$. (c) $\sigma_3(t)$. (d) $\sigma_4(t)$.
Figure 13. Comparison of control torques in the experiment compared with [13] (Song et al., 2023). (a) Control torque of joint 1. (b) Control torque of joint 2. (c) Control torque of joint 3. (d) Control torque of joint 4.

