Article

Decentralized Sliding Mode Control for Large-Scale Systems with Actuator Failures Using Dynamic Event-Triggered Adaptive Dynamic Programming

by
Yuling Liang
1,*,
Xiao Mao
1,
Kun Zhang
2,
Lei Liu
3,
He Jiang
4 and
Xiangmin Chen
5
1
School of Artificial Intelligence, Shenyang University of Technology, Shenyang 110870, China
2
School of Astronautics, Beihang University, Beijing 100191, China
3
School of Science, Liaoning University of Technology, Jinzhou 121000, China
4
School of Renewable Energy, Shenyang Institute of Engineering, Shenyang 110136, China
5
Department of Information and Control Engineering, Shenyang Institute of Science and Technology, Shenyang 110167, China
*
Author to whom correspondence should be addressed.
Actuators 2025, 14(9), 420; https://doi.org/10.3390/act14090420
Submission received: 26 July 2025 / Revised: 25 August 2025 / Accepted: 26 August 2025 / Published: 28 August 2025
(This article belongs to the Section Control Systems)

Abstract

This study develops a new integral sliding mode-based method to address the decentralized adaptive fault-tolerant guaranteed cost control (GCC) problem via a dynamic event-triggered (DET) adaptive dynamic programming (ADP) approach. First, integral sliding mode control technology is applied to eliminate the influence of actuator faults and to guarantee that the large-scale system states stay on the sliding mode surface. Second, an ADP algorithm operating in DET mode is employed to improve the control performance of the equivalent sliding-mode dynamics while reducing computational and communication overhead. Meanwhile, the GCC method is introduced to ensure that the performance cost function remains below an upper bound while maintaining system stability. Then, through Lyapunov stability analysis, it is proven that the presented ADP-based DET-GCC method guarantees that all signals are uniformly ultimately bounded. Finally, the validity of the developed approach is confirmed through simulation results.

1. Introduction

Over the past few years, large-scale (LS) system models have been widely studied in unmanned aerial vehicle (UAV) systems, smart transportation, microgrid systems, and other fields [1,2,3]. These systems are usually composed of multiple low-dimensional subsystems with complex dynamic coupling relationships. However, traditional centralized control methods cannot effectively handle the interactions between subsystems, which threatens the stability of the whole system [4]. Consequently, decentralized control approaches, which rely mainly on local state information for control design rather than on the global state of the whole system, have attracted wide attention [5]. In [6], a decentralized robust optimal control (OC) method was used for LS systems subject to constrained structural variations, and its superior performance was demonstrated in power systems. In [7], a neural network (NN)-based decentralized adaptive control approach was put forward to tackle the control issue of nonlinear large-scale (NLS) systems. In [8], a novel decentralized control approach for discrete-time LS systems was presented, which ensures the passivity properties of subsystems by locally solving semi-definite programs, thereby achieving asymptotic stability for the entire system.
The OC strategy for LS systems optimizes multiple performance indicators by adjusting control parameters [9,10]. Different from the adaptive control strategy [11,12], the adaptive dynamic programming (ADP) method, which learns future optimal strategies from current states and actions, was introduced to solve OC problems [13,14,15]. In [16], a novel ADP approach was designed to address the OC issue of unknown LS systems, with application to the fuel control system of turbofan engines. In [17], a decentralized ADP-based OC approach was developed to address the control issue for multi-agent systems. In [18], for nonlinear systems with state constraints, the authors presented a new ADP approach for tackling OC issues. In recent years, the ADP algorithm has effectively handled uncertainties in NLS systems with matched and unmatched interconnections while offering an approach for tackling decentralized OC issues. In [19], the ADP methodology was employed to address the decentralized OC issue in NLS systems subject to unknown time delays and mismatched interconnections, for which a novel cost function was developed to convert the decentralized control issue into an OC problem. In [20], by solving a series of Hamilton–Jacobi–Bellman (HJB) equations for auxiliary subsystems, a decentralized NN control scheme was proposed via the ADP algorithm. In [21], a synchronous value iteration algorithm based on the ADP method was formulated for the constrained optimization issue of uncertain NLS systems.
On the other hand, researchers have utilized ADP-based sliding mode control (SMC) schemes to address the fault-tolerant control (FTC) issue, which can not only eliminate the impact of actuator faults but also guarantee the OC performance of the sliding-mode dynamics (SMD) [22,23,24,25]. In [22], a comprehensive control method combining the ADP algorithm and global SMC was formulated to resolve the orientation control issue for flexible spacecraft subject to actuator failure. In [23], a novel off-line optimal SMC method based on an ADP algorithm was presented to handle the FTC problem for cascade nonlinear systems with mismatched uncertainties. In [24], an ADP-based sliding mode FTC method was proposed with an optimizer for OC parameters; the integral SMC in this method adds an integral term to traditional SMC to improve disturbance rejection. In [25], an FTC method combining fuzzy integral nonsingular terminal SMC and ADP was proposed to enhance launch vehicle flight safety in the event of actuator failure. However, the control methods in these studies have limitations in guaranteeing control performance when the system operates under complex fault scenarios. To enhance the robustness of the system against uncertainty, guaranteed cost control (GCC) was introduced in [26], with the goal of strictly ensuring that the performance cost index does not exceed an upper bound while satisfying system stability. In recent years, the GCC strategy has been further applied to more complex system models and scenarios, such as the study in [27] on finite-time GCC for LS singular systems subject to interconnected state lags. In [28], a quantitative guaranteed cost FTC approach was developed for addressing the control issue of unknown multi-input systems with actuator failures.
An event-triggered control (ETC) method can reduce communication and computing resource consumption compared to the traditional time-triggered control mode [29,30,31], as it updates the controller strategies only when preset conditions are violated. The static ETC approach optimizes resources for decentralized control of NLS systems by using fixed triggering conditions. In [32], a hierarchical sliding mode surface (SMS)-based ADP algorithm combined with the ETC approach was designed to tackle the OC issue for switched nonlinear systems. In [33], a decentralized ETC strategy based on the ADP method was proposed for NLS systems to reduce communication costs and computing burden. However, compared with static ETC, dynamic ETC offers higher resource utilization efficiency and a lower communication and computing burden. In [34], a novel ADP method under the dynamic ETC mode was developed using only agent interaction information, without needing the model dynamics, thus addressing the problem of resolving algebraic Riccati equations. Reference [35] presented a new decentralized control scheme for certain continuous-time interconnected NLS systems with interconnection terms and unmodeled internal dynamics by integrating the dynamic ETC and ADP methods. However, these studies consider only fault-free operation and do not account for actuator failures.
Inspired by the above discussions, this paper presents an SMC strategy based on the dynamic event-triggered ADP method for LS systems with actuator faults. The main innovations are as follows:
1.
This article proposes an FTC scheme for NLS systems with actuator faults using integral sliding mode control (ISMC) technology, which eliminates the effect of actuator faults and achieves stable SMD.
2.
This article presents a GCC approach based on the ADP algorithm, which ensures that the motion trajectories of the system stay on the SMS, while the overall performance index is less than a certain upper bound.
3.
Different from the ETC methods in [31,32,33], to improve the utilization of communication resources of the equivalent SMD, a decentralized dynamic ETC approach combined with the ADP method is designed to optimize the control performance of NLS systems.
The subsequent sections of this article are arranged as follows. Section 2 presents the problem description. Section 3 develops an FTC approach utilizing the ISMC methodology. Section 4 develops a dynamic ETC scheme based on GCC via the ADP method. Section 5 provides a simulation example for the corresponding subsystem to verify the strategy's performance. Section 6 presents the conclusions of the present study.

2. Problem Statement

The uncertain NLS systems are as follows:
ξ̇_i = g_i(ξ_i) + ω_i(ξ_i)[σ_i(t) + β_i(t − t_T) f_si(t)] + ε_i(ξ_i) ϱ_i(ξ),
where σ_i(t) ∈ R^{m_i} denotes the control input of the ith subsystem, ξ_i ∈ R^{n_i} represents the system state vector, and the abrupt actuator fault occurring at time t_T is represented by β_i(t − t_T) f_si(t). Here, β_i(t − t_T) = 1 for t ≥ t_T and β_i(t − t_T) = 0 for t < t_T. Assume that f_si(t) is continuous with ḟ_si(t) bounded by ‖ḟ_si(t)‖ ≤ b_F. g_i(ξ_i) ∈ R^{n_i} and ω_i(ξ_i) ∈ R^{n_i×m_i} denote system functions that satisfy local Lipschitz continuity on Ω_i ⊂ R^{n_i}; ω_i(ξ_i) is a known smooth function, which is different from g_i(ξ_i). Moreover, ϱ_i(ξ) ∈ R^{r_i} is the uncertain interconnection and ε_i(ξ_i) ∈ R^{n_i×r_i} is a disturbance function that describes the structure of the uncertain term, where ξ = [ξ_1ᵀ, ξ_2ᵀ, …, ξ_Nᵀ]ᵀ ∈ Rⁿ denotes the state of the NLS systems (1), with n = Σ_{i=1}^{N} n_i.
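As a concrete numerical illustration of the fault model in (1), the following sketch evaluates the right-hand side of one subsystem; the helper names, the Euler-style interface, and the example dynamics are illustrative assumptions, not part of the paper. The fault is assumed to enter through the input channel ω_i, consistent with Assumption 1.

```python
import numpy as np

def fault_indicator(t, t_T):
    """beta_i(t - t_T): 0 before the fault time t_T, 1 afterwards."""
    return 1.0 if t >= t_T else 0.0

def subsystem_rhs(t, xi, sigma, g_i, omega_i, eps_i, rho_i, f_si, t_T=5.0):
    """Right-hand side of the faulty subsystem (1):
    xi_dot = g_i(xi) + omega_i(xi) @ (sigma + beta * f_si(t)) + eps_i(xi) @ rho_i."""
    beta = fault_indicator(t, t_T)
    return (g_i(xi)
            + omega_i(xi) @ (sigma + beta * f_si(t))
            + eps_i(xi) @ rho_i)
```

Integrating this right-hand side with any ODE solver reproduces the abrupt change in the subsystem dynamics at t = t_T.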
Further, the nominal version of system (1) is
ξ ˙ i ( t ) = ω i ( ξ i ( t ) ) σ i ( t ) + g i ( ξ i ( t ) ) .
Assumption 1.
Assume that g_i + ω_i[σ_i + β_i(t − t_T) f_si] satisfies the Lipschitz continuity condition within a domain Ω ⊂ R^{n_i}, and suppose that system (2) is controllable.
Here, the matched uncertain term ε i ( ξ i ) ϱ i ( ξ ) satisfies
ϱ_iᵀ(ξ) ϱ_i(ξ) ≤ ϱ_iMᵀ(ξ) ϱ_iM(ξ).
Here, ϱ_i(·) ∈ R^{r_i}, i = 1, 2, …, N, is an uncertain function with ϱ_i(0) = 0, and ϱ_iM(·) ∈ R^{r_i} represents a known bounded function with ϱ_iM(0) = 0.
This research aims to construct an adaptive control strategy that enables the NLS systems to remain stable when actuator faults occur, and there exists an upper bound of the performance index for the system dynamics on the SMS. Additionally, a dynamic ETC mechanism is developed to enhance communication resource utilization efficiency.

3. Design of FTC Strategy Approach Based on ISMC Technique

To eliminate the influence of time-varying faults in NLS systems (1), we develop a SMC approach as follows:
σ i = σ 0 i + σ 1 i
with σ 0 i being the nominal control for guaranteed dynamic ETC performance, and σ 1 i being the control law for fault accommodation, which is utilized to eliminate the influence of actuator failures.
Assumption 2.
The system functions ω(ξ) and ε_i(ξ_i) are bounded by ‖ω(ξ)‖ ≤ ω_max and ‖ε_i(ξ_i)‖ ≤ ε_max, where ε_max and ω_max are positive constants. Additionally, ω_i(ξ_i) is a full-column-rank matrix.
Remark 1.
The guaranteed cost controller under the dynamic ETC method takes the form σ_i* = σ_0i* + σ_1i, ensuring that the nonlinear system (1) achieves stability while simultaneously optimizing the adjusted cost functional during sliding-mode operation.
The ISMC function is formulated as
S_i(ξ_i) = S_0i(ξ_i) − S_0i(ξ_0i) − ∫_0^t T_i(ξ_i)(g_i(ξ_i) + ω_i(ξ_i) σ_0i + ε_i(ξ_i) ϱ_i(ξ)) dτ,
where S_0i(ξ_i) ∈ R^{m×1} and T_i(ξ_i) = ∂S_0i(ξ_i)/∂ξ_i.
In light of (5), one has
Ṡ_i(ξ_i) = T_i(ξ_i) ω_i(ξ_i)(σ_1i + β_i(t − t_T) f_si(t)).
To ensure the system states remain on the SMS S i ( ξ i , t ) = 0 , σ 1 i is designed as
σ_1i = −α sgn(ω_iᵀ(ξ_i) T_iᵀ(ξ_i) S_i(ξ_i)),
where α and sgn(·) represent a gain and the sign function, respectively, with
sgn(ı) = 1 for ı > 0, 0 for ı = 0, and −1 for ı < 0,
where sgn(ı) = [sgn(ı_1), …, sgn(ı_{m_i})]ᵀ ∈ R^{m_i} is applied element-wise and ı = ω_iᵀ(ξ_i) T_iᵀ(ξ_i) S_i(ξ_i).
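The discontinuous fault-accommodation law (7)–(8) can be sketched numerically as follows; the function name and argument layout are illustrative assumptions, and numpy's element-wise `np.sign` is used as it matches the component-wise definition in (8).

```python
import numpy as np

def fault_accommodation_law(alpha, omega_i, T_i, S_i):
    """Discontinuous control term sigma_1i from (7)-(8):
    sigma_1i = -alpha * sgn(omega_i^T T_i^T S_i),
    where sgn acts element-wise on the vector argument."""
    return -alpha * np.sign(omega_i.T @ T_i.T @ S_i)
```

A switching gain α larger than the fault bound (as required by Theorem 1) drives the states to the sliding surface; in practice the sign function is often smoothed to reduce chattering.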
Theorem 1.
With regard to system (1), when the SMC law is designed based on (6) and α > β̄_i ‖f_si(t)‖, S_i(ξ_i) = 0 is achievable from the beginning, where β̄_i denotes the upper bound of β_i(t − t_T).
Proof. 
Define the Lyapunov candidate:
L_si = ½ S_iᵀ(ξ_i) S_i(ξ_i).
According to (6), we have
L̇_si = S_iᵀ(ξ_i) T_i(ξ_i) ω_i(ξ_i)[σ_1i + β_i(t − t_T) f_si(t)].
According to the control law (7) and the sign function (8), we have
L̇_si = S_iᵀ(ξ_i) T_i(ξ_i) ω_i(ξ_i)[−α sgn(ω_iᵀ(ξ_i) T_iᵀ(ξ_i) S_i(ξ_i)) + β_i(t − t_T) f_si(t)] = −α ‖S_iᵀ(ξ_i) T_i(ξ_i) ω_i(ξ_i)‖_1 + S_iᵀ(ξ_i) T_i(ξ_i) ω_i(ξ_i) β_i(t − t_T) f_si(t) ≤ −α ‖S_iᵀ(ξ_i) T_i(ξ_i) ω_i(ξ_i)‖_1 + β̄_i ‖f_si(t)‖ ‖S_iᵀ(ξ_i) T_i(ξ_i) ω_i(ξ_i)‖_1 = −(α − β̄_i ‖f_si(t)‖) ‖S_iᵀ(ξ_i) T_i(ξ_i) ω_i(ξ_i)‖_1,
where ‖β_i(t − t_T) f_si(t)‖ ≤ β̄_i ‖f_si(t)‖. If the condition α > β̄_i ‖f_si(t)‖ is satisfied, then L̇_si(t) ≤ 0 for any S_i(ξ_i) ≠ 0. Hence, the SMS S_i(ξ_i) = 0 achieves asymptotic stability.
The proof is completed. □
The objective of employing the ISMC methodology is to ensure that Ṡ_i(ξ_i) = 0; hence the equivalent control input takes the form σ_1eqi = −β_i(t − t_T) f_si(t). Substituting σ_1eqi = −β_i(t − t_T) f_si(t) into (1), the SMD is
ξ ˙ i ( t ) = g i ( ξ i ( t ) ) + ω i ( ξ i ( t ) ) σ 0 i ( t ) + ε i ( ξ i ) ϱ i ( ξ ) .
The nominal form of uncertain SMD (12) is
ξ ˙ i ( t ) = ω i ( ξ i ( t ) ) σ 0 i ( t ) + g i ( ξ i ( t ) ) .
Denote ξ i ( 0 ) = ξ i 0 . The control input is σ = [ σ 1 , σ 2 , , σ N ] . Define the cost function as
Γ̄(ξ_0, σ_0) = Σ_{i=1}^{N} ∫_0^∞ Z_i(ξ_i(τ), σ_0i(τ)) dτ,
with ξ_0 = [ξ_10, ξ_20, …, ξ_N0], Z_i(ξ_i, σ_0i) = ξ_iᵀ Q_i ξ_i + σ_0iᵀ R_i σ_0i, and R_i = R_iᵀ > 0 and Q_i = Q_iᵀ > 0 being constant matrices.
To address the decentralized guaranteed cost control issue, a feedback control law σ_0(ξ) = [σ_01(ξ_1), σ_02(ξ_2), …, σ_0N(ξ_N)] is designed such that a bounded cost functional J(σ) is obtained, i.e., J(σ) ≤ J_M < +∞, where J_M represents a positive number that guarantees the stability of the system, and the cost function (14) satisfies the constraint Γ̄ ≤ J.
It should be noted that the control system admits a more compact representation. To offer a clearer illustration of the system, we introduce the following notation:
g(ξ) = [g_1ᵀ(ξ_1), g_2ᵀ(ξ_2), …, g_Nᵀ(ξ_N)]ᵀ, ω(ξ) = diag{ω_1(ξ_1), ω_2(ξ_2), …, ω_N(ξ_N)},
and
ϱ(ξ) = [ϱ_1ᵀ(ξ), ϱ_2ᵀ(ξ), …, ϱ_Nᵀ(ξ)]ᵀ, ε(ξ) = diag{ε_1(ξ_1), ε_2(ξ_2), …, ε_N(ξ_N)}, ϱ_M(ξ) = [ϱ_1Mᵀ(ξ), ϱ_2Mᵀ(ξ), …, ϱ_NMᵀ(ξ)]ᵀ.
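The aggregation above can be sketched with a small helper that stacks the subsystem drift terms and builds the block-diagonal input matrix diag{ω_1, …, ω_N}; the helper name and interface are illustrative assumptions.

```python
import numpy as np

def aggregate(omega_list, g_list):
    """Stack subsystem drift terms g_i into one vector and place the
    input matrices omega_i on the block diagonal, as in the notation above."""
    g = np.concatenate(g_list)
    rows = sum(w.shape[0] for w in omega_list)
    cols = sum(w.shape[1] for w in omega_list)
    omega = np.zeros((rows, cols))
    r = c = 0
    for w in omega_list:
        omega[r:r + w.shape[0], c:c + w.shape[1]] = w
        r += w.shape[0]
        c += w.shape[1]
    return g, omega
```

The block-diagonal structure reflects the decentralized setting: each subsystem's input only enters its own block of the aggregated dynamics.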
Thus, the nonlinear system (12) becomes
ξ ˙ ( t ) = ω ( ξ ) σ 0 + ε ( ξ ) ϱ ( ξ ) + g ( ξ )
where σ 0 is the control for guaranteed dynamic ETC performance.
The nominal version of system (15) is
ξ ˙ = ω ( ξ ) σ 0 + g ( ξ )
The cost function (14) becomes
Γ̄(ξ_0, σ_0) = ∫_0^∞ Z(ξ(τ), σ_0(τ)) dτ,
where Z(ξ, σ_0) = ξᵀQξ + σ_0ᵀRσ_0, with R = diag{R_1, R_2, …, R_N} and Q = diag{Q_1, Q_2, …, Q_N} being constant block-diagonal matrices.

4. Dynamic Event-Triggered Guaranteed Cost Control Using ADP

Here, we present the design of a control law σ_0, which is obtained by using the ADP algorithm under the dynamic ETC approach to achieve a near-optimal control strategy.
Lemma 1 is introduced to demonstrate that the cost function for the NLS system (15) can be well-defined, with its proof provided in [36].
Lemma 1.
Suppose there exist a bounded positive function J ( ξ ) , a cost function V ( ξ ) with V ( ξ ) > 0 , and a control input σ 0 ( ξ ) , so that
(∇V(ξ))ᵀ ε(ξ) ϱ(ξ) ≤ J(ξ); Z(ξ, σ_0) + (∇V(ξ))ᵀ(ω(ξ) σ_0 + g(ξ)) + J(ξ) = 0,
with ∇V ≜ ∂V/∂ξ. Then the local asymptotic stability of system (16) can be verified within a specific domain around the origin. Furthermore, it follows that Γ̄(ξ_0, σ_0) ≤ V(ξ_0, σ_0) = Γ(ξ_0, σ_0), where Γ(ξ_0, σ_0), representing the adjusted cost function for system (16), is given by
Γ(ξ_0, σ_0) = ∫_0^∞ {Z(ξ, σ_0(ξ)) + J(ξ)} dτ.
Remark 2.
Referring to [37,38,39], the bounded function J ( ξ ) is
J(ξ) = ¼ (∇V(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇V(ξ) + ϱ_Mᵀ(ξ) ϱ_M(ξ),
where J ( ξ ) is designed to handle the coupling and uncertain terms of the system.
Considering system (16), the adjusted cost function is defined as
V(ξ_0) = ∫_0^∞ {Z(ξ, σ_0) + J(ξ)} dτ.
Based on (21) for system (16), the Hamiltonian function is
H(ξ, σ_0, ∇V(ξ)) = Z(ξ, σ_0) + (∇V(ξ))ᵀ(ω(ξ) σ_0 + g(ξ)) + J(ξ).
Moreover, the optimal guaranteed cost for the NLS system (16) is Γ*(ξ_0) = min_{σ_0 ∈ A(Ω)} Γ(ξ_0, σ_0), where Γ(ξ_0, σ_0) is provided in (19) and A(Ω) represents the admissible control set. In the OC design, Γ*(ξ) satisfies the HJB equation
0 = min_{σ_0 ∈ A(Ω)} H(ξ, σ_0, ∇Γ*(ξ)).
According to the equilibrium conditions, the OC policy is derived as
σ_0*(ξ) = −½ R⁻¹ ωᵀ(ξ) ∇Γ*(ξ).
The modified HJB equation is formulated as
0 = Z(ξ, σ_0*) + (∇Γ*(ξ))ᵀ(ω(ξ) σ_0* + g(ξ)) + ϱ_Mᵀ(ξ) ϱ_M(ξ) + ¼ (∇Γ*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ*(ξ), Γ*(0) = 0.
According to (24) and (25), we have
0 = ξᵀQξ − ¼ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ) ∇Γ*(ξ) + (∇Γ*(ξ))ᵀ g(ξ) + ϱ_Mᵀ(ξ) ϱ_M(ξ) + ¼ (∇Γ*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ*(ξ).
Consider the NLS system (15) with the modified index (17), and assume that the solution to the HJB equation (25) exists and is continuous; denote this solution as Γ*(ξ). The optimal guaranteed cost is then attained when σ_0 = σ_0*, i.e., J(σ_0*) = Γ*(ξ_0). Consequently, it follows that J* = min_{σ_0} J(σ_0) = Γ*(ξ_0) and σ̄_0* = arg min_{σ_0} J(σ_0) = σ_0*. Therefore, the optimal GCC problem for the original NLS system (15) can be transformed into solving an OC problem for the nominal system.

4.1. Dynamic ETC-Based ADP Method Design

Initially, establish a strictly monotonically increasing sequence {t_j}_{j=0}^∞, where j ∈ N and t_0 = 0, comprising the triggering instants t_j. For any t ∈ [t_j, t_{j+1}), denote the sampled state as ξ̄_j = ξ(t_j).
The triggering error is given by
e_j(t) = ξ̄_j − ξ(t).
The corresponding control is described by
σ 0 ( t ) = σ 0 ( ξ ¯ j ) .
According to (27), the dynamic ETC strategy is expressed as
σ 0 ( ξ ¯ j ) = σ 0 ( e j ( t ) + ξ ( t ) ) .
Based on (24), the optimal dynamic ETC strategy is obtained as
σ_0*(ξ̄_j) = −½ R⁻¹ ωᵀ(ξ̄_j) ∇Γ*(ξ̄_j), t ∈ [t_j, t_{j+1}),
where ∇Γ*(ξ̄_j) = ∂Γ*(ξ)/∂ξ evaluated at ξ = ξ̄_j.
Subsequently, the HJB equation (25) can be rewritten as
H(ξ, σ_0*(ξ̄_j), ∇Γ*(ξ)) = ξᵀQξ + ¼ (∇Γ*(ξ̄_j))ᵀ ω(ξ̄_j) R⁻¹ ωᵀ(ξ̄_j) ∇Γ*(ξ̄_j) + (∇Γ*(ξ))ᵀ g(ξ) − ½ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ̄_j) ∇Γ*(ξ̄_j) + ϱ_Mᵀ(ξ) ϱ_M(ξ) + ¼ (∇Γ*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ*(ξ).
Assumption 3.
Suppose that the Lipschitz condition holds for σ_0*(ξ), such that ‖σ_0(ξ(t)) − σ_0(ξ̄_j)‖ ≤ ρ ‖e_j(t)‖, where ρ represents a positive constant.
Assumption 4.
The cost function Γ*(ξ) is bounded by Γ_M in the compact set Ξ ⊂ Rⁿ. Moreover, ‖∇Γ*(ξ)‖ ≤ Γ_CM holds with Γ_CM a positive constant.
Theorem 2.
Considering the NLS systems (15) and (16), if Assumptions 3 and 4 hold and the solution of the HJB equation (23) exists, then the dynamic ETC near-OC signal is designed as (29) under the triggering condition
‖e_j‖² ≤ (β_1 σ̆_1 + η² λ_min(Q) ‖ξ‖²)/(r ρ²) ≜ e_T,
where 0 < η < 1, β_1 > 0, r denotes the largest singular value of the matrix R, and e_T is the triggering threshold. The dynamics of the internal signal σ̆_1 are given by
σ̆̇_1 = −β_1 σ̆_1 + λ_min(Q) ‖ξ‖².
Proof. 
For system (15), the optimal cost function Γ*(ξ(t)) is positive definite, while σ_0*(ξ̄_j) denotes the dynamic ETC strategy. According to (15), selecting V(t) = V_1 + V_2 = Γ*(ξ(t)) + σ̆_1 as the Lyapunov function, one obtains
V̇(ξ(t)) = σ̆̇_1 + (∇Γ*(ξ))ᵀ(ω(ξ) σ_0*(ξ̄_j) + g(ξ) + ε(ξ) ϱ(ξ)).
According to (25), it follows that
(∇Γ*(ξ))ᵀ g(ξ) = −(σ_0*)ᵀ R σ_0* − ξᵀQξ + ½ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ) ∇Γ*(ξ) − ϱ_Mᵀ(ξ) ϱ_M(ξ) − ¼ (∇Γ*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ*(ξ).
Substituting (34) into (33) yields
V̇(ξ(t)) = −ξᵀQξ − (σ_0*(ξ))ᵀ R σ_0*(ξ) + ½ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ) ∇Γ*(ξ) − ϱ_Mᵀ(ξ) ϱ_M(ξ) − ¼ (∇Γ*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ*(ξ) + σ̆̇_1 + (∇Γ*(ξ))ᵀ ω(ξ) σ_0*(ξ̄_j) + (∇Γ*(ξ))ᵀ ε(ξ) ϱ(ξ).
Using Young's inequality, we have
(∇Γ*(ξ))ᵀ ε(ξ) ϱ(ξ) ≤ ¼ (∇Γ*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ*(ξ) + ϱᵀ(ξ) ϱ(ξ).
Since ϱᵀ(ξ)ϱ(ξ) ≤ ϱ_Mᵀ(ξ)ϱ_M(ξ) and using (24), we have
V̇(ξ(t)) ≤ −ξᵀQξ + ¼ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ) ∇Γ*(ξ) + (∇Γ*(ξ))ᵀ ω(ξ) σ_0*(ξ̄_j) + σ̆̇_1 ≤ −η² λ_min(Q) ‖ξ‖² − (1 − η²) λ_min(Q) ‖ξ‖² + σ̆̇_1 + ¼ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ) ∇Γ*(ξ) + (∇Γ*(ξ))ᵀ ω(ξ) σ_0*(ξ̄_j).
Based on (37), it has
V̇(ξ(t)) ≤ −η² λ_min(Q) ‖ξ‖² − (1 − η²) λ_min(Q) ‖ξ‖² + σ̆̇_1 + ¼ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ) ∇Γ*(ξ) − ½ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ̄_j) ∇Γ*(ξ̄_j) + ¼ (∇Γ*(ξ̄_j))ᵀ ω(ξ̄_j) R⁻¹ ωᵀ(ξ̄_j) ∇Γ*(ξ̄_j) − ¼ (∇Γ*(ξ̄_j))ᵀ ω(ξ̄_j) R⁻¹ ωᵀ(ξ̄_j) ∇Γ*(ξ̄_j).
Using (24), (29), and (38), we have
V̇(ξ(t)) ≤ −η² λ_min(Q) ‖ξ‖² − (1 − η²) λ_min(Q) ‖ξ‖² + σ̆̇_1 + (σ_0*(ξ) − σ_0*(ξ̄_j))ᵀ R (σ_0*(ξ) − σ_0*(ξ̄_j)).
Based on Assumption 3 and Equation (39), it has
(σ_0*(ξ) − σ_0*(ξ̄_j))ᵀ R (σ_0*(ξ) − σ_0*(ξ̄_j)) ≤ r ρ² ‖e_j‖²,
where r denotes the largest singular value for matrix R.
Substituting (40) into (39) yields
V̇(ξ(t)) ≤ −η² λ_min(Q) ‖ξ‖² − (1 − η²) λ_min(Q) ‖ξ‖² + σ̆̇_1 + r ρ² ‖e_j‖².
Provided that the triggering condition (31) holds, it follows that V̇(ξ(t)) ≤ −η² λ_min(Q) ‖ξ‖² < 0 for any ξ ≠ 0.
The proof is finished. □
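The dynamic triggering rule of Theorem 2 can be sketched numerically as follows; the function names, the scalar parameters, and the explicit Euler step for the internal signal are illustrative assumptions layered on conditions (31)–(32).

```python
import numpy as np

def trigger_fired(e_j, sigma1, xi, beta1, eta, lam_min_Q, r, rho):
    """An event fires when condition (31) is violated, i.e., when
    ||e_j||^2 > (beta1*sigma1 + eta^2*lam_min(Q)*||xi||^2) / (r*rho^2)."""
    threshold = (beta1 * sigma1 + eta**2 * lam_min_Q * np.dot(xi, xi)) / (r * rho**2)
    return np.dot(e_j, e_j) > threshold

def internal_variable_step(sigma1, xi, beta1, lam_min_Q, dt):
    """One Euler step of the internal dynamics (32):
    sigma1_dot = -beta1*sigma1 + lam_min(Q)*||xi||^2."""
    return sigma1 + dt * (-beta1 * sigma1 + lam_min_Q * np.dot(xi, xi))
```

Because the internal signal σ̆_1 accumulates recent state energy, the threshold adapts over time, which is what distinguishes this dynamic rule from a static event-triggering condition.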
Theorem 3.
Considering (15) with the cost function (17), assume that a continuously differentiable solution Γ*(ξ) of the HJB equation (23) exists. Then the OC input under the dynamic ETC strategy is developed as (24), and the near-optimal guaranteed cost function (17) satisfies
Γ̄(ξ(0), σ_0*(ξ̄_j)) ≤ Γ*(ξ(0)) + ∫_0^∞ (σ_0*(ξ̄_j) − σ_0*(ξ))ᵀ R (σ_0*(ξ̄_j) − σ_0*(ξ)) dτ.
Additionally, setting σ_0*(ξ) = σ_0*(ξ̄_j), one gets
Γ̄(ξ(0), σ_0*(ξ̄_j)) ≤ Γ*(ξ(0)).
Proof. 
By applying a control input σ 0 ( ξ ) , it follows that
Γ̄(ξ(0), σ_0) = ∫_0^∞ [ξᵀQξ + σ_0ᵀRσ_0 + Γ̇*(ξ)] dτ + Γ*(ξ(0)).
Based on (15) and (34), we have
Z(ξ, σ_0) + Γ̇*(ξ) = σ_0ᵀRσ_0 − (σ_0*)ᵀRσ_0* − ¼ (∇Γ*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ*(ξ) − ϱ_Mᵀ(ξ) ϱ_M(ξ) + (∇Γ*(ξ))ᵀ ω(ξ)(σ_0 − σ_0*) + (∇Γ*(ξ))ᵀ ε(ξ) ϱ(ξ).
Owing to (36), one obtains
Z(ξ, σ_0) + Γ̇*(ξ) ≤ σ_0ᵀRσ_0 − (σ_0*)ᵀRσ_0* + (∇Γ*(ξ))ᵀ ω(ξ)(σ_0 − σ_0*).
According to (24), it has
σ_0ᵀRσ_0 − (σ_0*)ᵀRσ_0* = (σ_0 − σ_0*)ᵀR(σ_0 − σ_0*) + 2(σ_0*)ᵀR(σ_0 − σ_0*).
By substituting (47) into (46), we have
Z(ξ, σ_0) + Γ̇*(ξ) ≤ (σ_0 − σ_0*)ᵀR(σ_0 − σ_0*).
By invoking (44), (45), and (48), it obtains
Γ̄(ξ(0), σ_0) = ∫_0^∞ [Z(ξ, σ_0) + Γ̇*(ξ)] dτ + Γ*(ξ(0)) ≤ ∫_0^∞ (σ_0 − σ_0*)ᵀR(σ_0 − σ_0*) dτ + Γ*(ξ(0)).
Substituting σ_0 = σ_0*(ξ̄_j) results in Γ̄(ξ(0), σ_0*(ξ̄_j)) ≤ Γ*(ξ(0)) + ∫_0^∞ (σ_0*(ξ̄_j) − σ_0*(ξ))ᵀ R (σ_0*(ξ̄_j) − σ_0*(ξ)) dτ.
Therefore, the proof is concluded. □

4.2. Neural Network-Based Realization

According to the universal approximation principle of NNs within a compact region Ω [40], Γ * ( ξ ) can be given by
Γ * ( ξ ) = δ c ( ξ ) + ϑ c T κ c ( ξ )
where δ_c(ξ) ∈ R is the approximation error, ϑ_c ∈ R^{n_c} represents the target weight of the critic NN, κ_c(ξ) ∈ R^{n_c} denotes the activation function, and n_c represents the number of hidden-layer nodes of the critic NN. For practical implementation, the cost function is approximated as
Γ ^ ( ξ ) = ϑ ^ c T κ c ( ξ )
with ϑ ^ c R n c being the real weight for the critic NN.
Assumption 5.
The NN weight vector ϑ_c satisfies ‖ϑ_c‖ ≤ ϑ̄_c, and the activation function κ_c(ξ(t)) satisfies ‖κ_c(ξ(t))‖ < κ_cm and ‖∂κ_c(ξ(t))/∂ξ‖ ≤ κ̄_cm. The residual approximation errors satisfy ‖δ_c(ξ)‖ < δ_cm and ‖∂δ_c(ξ)/∂ξ‖ < δ̄_cm, where κ_cm, κ̄_cm, δ̄_cm, and δ_cm are positive constants.
Using (50), one has
∇Γ*(ξ) = (∇κ_c(ξ))ᵀ ϑ_c + ∇δ_c(ξ),
where ∇δ_c(ξ) = ∂δ_c(ξ)/∂ξ and ∇κ_c(ξ) = ∂κ_c(ξ)/∂ξ.
Furthermore, it obtains
∇Γ̂*(ξ) = (∇κ_c(ξ))ᵀ ϑ̂_c.
Based on (29) and (52), the dynamic event-triggered OC law is
σ_0*(ξ̄_j) = −½ R⁻¹ ωᵀ(ξ̄_j) [∇δ_c(ξ̄_j) + (∇κ_c(ξ̄_j))ᵀ ϑ_c].
Subsequently, the actual control policy is
σ̂_0(ξ̄_j) = −½ R⁻¹ ωᵀ(ξ̄_j) (∇κ_c(ξ̄_j))ᵀ ϑ̂_c.
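The critic-based actual policy (55) can be sketched as follows; the function and argument names are illustrative assumptions, and grad_kappa stands for the activation-function Jacobian ∂κ_c/∂ξ.

```python
import numpy as np

def critic_policy(xi_bar, theta_hat, grad_kappa, omega, R_inv):
    """Approximate guaranteed-cost policy (55):
    sigma_hat_0 = -1/2 * R^{-1} omega^T(xi_bar) (grad kappa_c(xi_bar))^T theta_hat."""
    grad_V = grad_kappa(xi_bar).T @ theta_hat   # estimated cost-function gradient
    return -0.5 * R_inv @ omega(xi_bar).T @ grad_V
```

Between triggering instants the controller simply holds the value computed at the last sampled state ξ̄_j, which is why only ξ̄_j appears as an argument.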
From (25) and (29), the approximate Hamiltonian function is
H(ξ, σ̂_0(ξ̄_j), ϑ̂_c) = ((∇κ_c(ξ))ᵀ ϑ̂_c)ᵀ (ω(ξ) σ̂_0(ξ̄_j) + g(ξ)) + Z(ξ, σ̂_0(ξ̄_j)) + ϱ_Mᵀ(ξ) ϱ_M(ξ) + ¼ (∇Γ̂*(ξ))ᵀ ε(ξ) εᵀ(ξ) ∇Γ̂*(ξ) ≜ e_c.
The weight adaptation strategy for the critic NN is presented as
ϑ̂̇_c^{tra} = −[α_c θ/(1 + θᵀθ)²][ϑ̂_cᵀ θ + J(ξ) + Z(ξ, σ̂_0(ξ̄_j))],
with θ = ∇κ_c(ξ)(g(ξ) + ω(ξ) σ̂_0(ξ̄_j)) and α_c being the update rate of the critic NN. Utilizing (53), one gets
J(ξ) = ¼ ϑ̂_cᵀ ∇κ_c(ξ) ε(ξ) εᵀ(ξ) (∇κ_c(ξ))ᵀ ϑ̂_c + ϱ_Mᵀ(ξ) ϱ_M(ξ).
In the proposed approach, to relax the persistence of excitation condition, an enhanced weight adjustment strategy for the critic NN is
ϑ̂̇_c = −[α_c θ/(1 + θᵀθ)²][ϑ̂_cᵀ θ + J(ξ) + Z(ξ, σ̂_0(ξ̄_j))] − Σ_{k=1}^{l_c} [α_c θ(k)/(1 + θ(k)ᵀθ(k))²][ϑ̂_cᵀ θ(k) + J(ξ(t_k)) + Z(ξ(t_k), σ̂_0(ξ̄_j))],
with k ∈ {1, 2, …, l_c} being the index of the stored data ξ(t_k), t_k ∈ [t_j, t_{j+1}), j ∈ N, θ(k) = ∇κ_c(ξ(t_k))(g(ξ(t_k)) + ω(ξ(t_k)) σ̂_0(ξ̄_j)), and
J(ξ(t_k)) = ¼ ϑ̂_cᵀ ∇κ_c(ξ(t_k)) ε(ξ(t_k)) εᵀ(ξ(t_k)) (∇κ_c(ξ(t_k)))ᵀ ϑ̂_c + ϱ_Mᵀ(ξ(t_k)) ϱ_M(ξ(t_k)).
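One Euler step of the experience-replay weight update (59) can be sketched as follows; the function names and the discrete-time form are illustrative assumptions. Each regressor phi corresponds to θ (or a stored θ(k)), and each target corresponds to J(ξ) + Z(ξ, σ̂_0(ξ̄_j)).

```python
import numpy as np

def critic_update(theta_hat, phi_now, target_now, phi_hist, target_hist, alpha_c, dt):
    """One Euler step of the replay-based critic update (59): a normalized
    gradient term for the current sample plus one term per stored sample."""
    def term(phi, target):
        m = (1.0 + phi @ phi) ** 2                       # (1 + phi^T phi)^2
        return -(alpha_c * phi / m) * (theta_hat @ phi + target)
    dtheta = term(phi_now, target_now)
    for phi_k, tgt_k in zip(phi_hist, target_hist):      # replayed history
        dtheta = dtheta + term(phi_k, tgt_k)
    return theta_hat + dt * dtheta
```

Replaying the l_c stored samples is what relaxes the persistence of excitation condition: the weights keep learning from past data even when the current regressor is uninformative.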
Define the weights estimation error as
ϑ̃_c = ϑ_c − ϑ̂_c.
Based on (59) and (61), it follows that
ϑ̃̇_c = −α_c (ψψᵀ + Σ_{k=1}^{l_c} ψ(k)ψ(k)ᵀ) ϑ̃_c + [α_c ψ/(1 + θᵀθ)] δ_H + Σ_{k=1}^{l_c} [α_c ψ(k)/(1 + θ(k)ᵀθ(k))] δ_H(k), t, t_k ∈ [t_j, t_{j+1}),
with δ_H = ∇δ_cᵀ(ξ)[g(ξ) + ω(ξ) σ̂_0(ξ̄_j)] and δ_H(k) = ∇δ_cᵀ(ξ(t_k))[g(ξ(t_k)) + ω(ξ(t_k)) σ̂_0(ξ̄_j)] representing the residual errors, and ψ = θ/(1 + θᵀθ) and ψ(k) = θ(k)/(1 + θ(k)ᵀθ(k)).
Define the augmented state vector represented by χ = [ ξ T , ξ ¯ j T , ϑ ˜ c T ] T , incorporating the closed-loop SMD ξ ˙ = ω ( ξ ) σ ^ 0 ( ξ ¯ j ) + g ( ξ ) , and then the hybrid control system becomes
χ̇(t) = [g(ξ) + ω(ξ) σ̂_0(ξ̄_j); 0; −α_c Ψ(ψ, ψ(k)) ϑ̃_c + Π(ψ, ψ(k))], t ∈ [t_j, t_{j+1}),
with
Ψ(ψ, ψ(k)) = ψψᵀ + Σ_{k=1}^{l_c} ψ(k)ψ(k)ᵀ, Π(ψ, ψ(k)) = Σ_{k=1}^{l_c} [α_c ψ(k)/(1 + θ(k)ᵀθ(k))] δ_H(k) + [α_c ψ/(1 + θᵀθ)] δ_H,
and
χ(t) = χ(t⁻) + [0; ξ − ξ̄_j; 0], t = t_{j+1},
with χ(t⁻) = lim_{ε→0⁺} χ(t − ε).
Remark 3.
To address the FTC problem for NLS systems and improve communication resource utilization, an integrated control framework is developed that combines ISMC technology with ADP-based GCC under a dynamic ETC mechanism. The proposed approach weakens the impact of actuator failures on system stability and ensures that the control performance index remains within a guaranteed bound under the ETC mode.
Remark 4.
The simulation experiment in this paper is implemented using MATLAB R2021a. To clearly present the designed control scheme, the relevant control block diagram of the proposed control strategy is shown in Figure 1.

4.3. Stability Analysis

Theorem 4.
Consider the NLS systems (15) and (16), assume that Assumptions 2–5 hold, let the control strategy be given by (55), and let the critic NN weights be updated by (59). Then, under the dynamic ETC condition
‖e_j‖² ≤ (β_2 σ̆_2 + ℏ² λ_min(Q) ‖ξ‖² + σ̂_0ᵀ(ξ̄_j) R σ̂_0(ξ̄_j))/(2 r² ρ²) ≜ ē_T,
where ℏ denotes a design parameter within the interval [0, 1], ē_T is the triggering threshold, and the dynamics of the internal signal σ̆_2 are given by
σ̆̇_2 = −β_2 σ̆_2 + λ_min(Q) ‖ξ‖²,
the parameter approximation error ϑ̃_c and the sliding dynamics (16) both achieve UUB.
Proof. 
The candidate Lyapunov function is
L ( t ) = L 1 ( t ) + L 2 ( t ) + L 3 ( t ) + L 4 ( t )
where L_1(t) = Γ*(ξ̄_j), L_2(t) = Γ*(ξ(t)), L_3(t) = ½ ϑ̃_cᵀ ϑ̃_c, and L_4(t) = σ̆_2. The complete proof involves consideration of two cases.
Case 1: During t ∈ [t_j, t_{j+1}), j ∈ N, no event is triggered. One has
L ˙ 1 ( t ) = Γ ˙ * ( ξ ¯ j ) = 0 .
Based on L 2 ( t ) = Γ * ( ξ ( t ) ) , and according to ξ ˙ = ω ( ξ ) σ ^ 0 ( ξ ¯ j ) + ε ( ξ ( t ) ) ϱ ( ξ ( t ) ) + g ( ξ ) , we have
L̇_2(t) = (∇Γ*(ξ))ᵀ ξ̇ = (∇Γ*(ξ))ᵀ[g(ξ) + ε(ξ)ϱ(ξ) + ω(ξ) σ̂_0(ξ̄_j)] = (∇Γ*(ξ))ᵀ g(ξ) + (∇Γ*(ξ))ᵀ ω(ξ) σ̂_0(ξ̄_j) + (∇Γ*(ξ))ᵀ ε(ξ) ϱ(ξ).
According to (26), one has
L̇_2(t) ≤ −ξᵀQξ + (∇Γ*(ξ))ᵀ ω(ξ)[σ̂_0(ξ̄_j) − σ_0*(ξ)] − ¼ (∇Γ*(ξ))ᵀ ω(ξ) R⁻¹ ωᵀ(ξ) ∇Γ*(ξ) = −ξᵀQξ + (σ_0*(ξ))ᵀ R σ_0*(ξ) − 2(σ_0*(ξ))ᵀ R σ̂_0(ξ̄_j).
Based on the fact that 2aᵀRb = −(a − b)ᵀR(a − b) + aᵀRa + bᵀRb, it follows that
−2(σ_0*(ξ))ᵀ R σ̂_0(ξ̄_j) + (σ_0*(ξ))ᵀ R σ_0*(ξ) = (σ_0*(ξ) − σ̂_0(ξ̄_j))ᵀ R (σ_0*(ξ) − σ̂_0(ξ̄_j)) − σ̂_0ᵀ(ξ̄_j) R σ̂_0(ξ̄_j).
Considering Assumptions 3 and 5, (54), (55), and the bounds ‖∇δ_c(ξ)‖² ≤ δ̄_cm² and ‖∇κ_c(ξ)‖² ≤ κ̄_cm², it follows that
(σ_0*(ξ) − σ̂_0(ξ̄_j))ᵀ R (σ_0*(ξ) − σ̂_0(ξ̄_j)) ≤ 2r²ρ²‖e_j(t)‖² + 2‖½R⁻¹‖² ω_max² ‖∇κ_cᵀ(ξ̄_j) ϑ̃_c + ∇δ_c(ξ̄_j)‖² ≤ 2r²ρ²‖e_j(t)‖² + ½‖R⁻¹‖² ω_max² (κ̄_cm ‖ϑ̃_c‖ + δ̄_cm)² ≤ 2r²ρ²‖e_j(t)‖² + ‖R⁻¹‖² ω_max² κ̄_cm² ‖ϑ̃_c‖² + ‖R⁻¹‖² ω_max² δ̄_cm².
Based on Assumption 4, (71), (72), and (73), it follows that
L̇_2(t) ≤ −ℏ² λ_min(Q) ‖ξ‖² − (1 − ℏ²) λ_min(Q) ‖ξ‖² + 2r²ρ²‖e_j(t)‖² + ‖R⁻¹‖² ω_max² κ̄_cm² ‖ϑ̃_c‖² + ‖R⁻¹‖² ω_max² δ̄_cm² − σ̂_0ᵀ(ξ̄_j) R σ̂_0(ξ̄_j).
Furthermore, the derivative of L 3 ( t ) is
L ˙ 3 ( t ) = α c ϑ ˜ c T ψ ψ T + k = 1 l c ψ ( k ) ψ ( k ) T ϑ ˜ c + α c ϑ ˜ c T ψ 1 + θ T θ δ H + k = 1 l c α c ϑ ˜ c T ψ ( k ) 1 + θ ( k ) T θ ( k ) δ H ( k ) .
Based on the fact a T b ( 1 / 2 ) a T a + ( 1 / 2 ) b T b , it can be derived that
$$\alpha_c\tilde{\vartheta}_c^T\frac{\psi}{1+\theta^T\theta}\delta_H \le \frac{\alpha_c}{2}\tilde{\vartheta}_c^T\psi\psi^T\tilde{\vartheta}_c + \frac{\alpha_c}{2}\delta_H^T\delta_H$$
$$\sum_{k=1}^{l_c}\alpha_c\tilde{\vartheta}_c^T\frac{\psi^{(k)}}{1+\theta^{(k)T}\theta^{(k)}}\delta_H^{(k)} \le \frac{\alpha_c}{2}\sum_{k=1}^{l_c}\tilde{\vartheta}_c^T\psi^{(k)}\psi^{(k)T}\tilde{\vartheta}_c + \frac{\alpha_c}{2}\sum_{k=1}^{l_c}\delta_H^{(k)T}\delta_H^{(k)}$$
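Both splittings rely on Young's inequality $a^Tb \le \frac{1}{2}a^Ta + \frac{1}{2}b^Tb$, together with $1/(1+\theta^T\theta) \le 1$. A quick randomized check of the inequality on arbitrary illustrative vectors:

```python
import numpy as np

# Randomized check of Young's inequality a^T b <= (1/2) a^T a + (1/2) b^T b,
# which is used above to split the cross terms in the bound on dot-L3.
rng = np.random.default_rng(1)
for _ in range(100):
    a = rng.standard_normal(5)
    b = rng.standard_normal(5)
    assert a @ b <= 0.5 * (a @ a) + 0.5 * (b @ b) + 1e-12
```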
with $\delta_H$ being the residual error. Suppose that $\|\delta_H\| \le b_{\delta_H}$. According to (75), (76), and (77), it follows that
$$\dot{L}_3(t) \le -\frac{\alpha_c}{2}\lambda_{\min}\big(\Psi(\psi,\psi^{(k)})\big)\|\tilde{\vartheta}_c\|^2 + \frac{\alpha_c(l_c+1)}{2}b_{\delta_H}^2,$$
where $\Psi(\psi,\psi^{(k)})$ is given in (64).
Based on (69), (74), and (78), we have
$$\dot{L}(t) \le -2\lambda_{\min}(Q)\|\xi\|^2 - \frac{1}{2}\lambda_{\min}(Q)\|\xi\|^2 + 2r^2\rho^2\|e_j(t)\|^2 - \frac{1}{2}\alpha_c\lambda_{\min}\big(\Psi(\psi,\psi^{(k)})\big)\|\tilde{\vartheta}_c\|^2 + \dot{\breve{\sigma}}^2 + \frac{\alpha_c(l_c+1)}{2}b_{\delta_H}^2 + \|R^{-1}\|^2\omega_{\max}^2\bar{\kappa}_{cm}^2\|\tilde{\vartheta}_c\|^2 + \|R^{-1}\|^2\omega_{\max}^2\bar{\delta}_{cm}^2 - \hat{\sigma}_0^T(\bar{\xi}_j)R\hat{\sigma}_0(\bar{\xi}_j).$$
The triggering condition, constructed as (66), is activated when one of the following inequalities is satisfied:
$$\|\xi\| > \sqrt{\frac{\alpha_c(l_c+1)b_{\delta_H}^2 + 2\|R^{-1}\|^2\omega_{\max}^2\bar{\delta}_{cm}^2}{2\lambda_{\min}(Q)}} \triangleq D_1$$
$$\|\tilde{\vartheta}_c\| > \sqrt{\frac{\alpha_c(l_c+1)b_{\delta_H}^2 + 2\|R^{-1}\|^2\omega_{\max}^2\bar{\delta}_{cm}^2}{\alpha_c\lambda_{\min}\big(\Psi(\psi,\psi^{(k)})\big) - 2\|R^{-1}\|^2\omega_{\max}^2\bar{\kappa}_{cm}^2}} \triangleq D_2$$
Hence, $\dot{L}(t) < 0$ holds under the condition $\alpha_c\lambda_{\min}\big(\Psi(\psi,\psi^{(k)})\big) - 2\|R^{-1}\|^2\omega_{\max}^2\bar{\kappa}_{cm}^2 > 0$, which demonstrates that the UUB property holds for both $\xi(t)$ and $\tilde{\vartheta}_c$.
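To make the two bounds concrete, the sketch below evaluates $D_1$ and $D_2$ for a set of placeholder constants; every numerical value ($\alpha_c$, $l_c$, $b_{\delta_H}$, $\omega_{\max}$, and so on) is an illustrative assumption, since the paper does not fix them at this point.

```python
import numpy as np

# Illustrative evaluation of the UUB radii D1 and D2. Every constant below
# is a placeholder assumption, chosen so that the feasibility condition
# alpha_c * lambda_min(Psi) - 2 ||R^-1||^2 omega_max^2 kappa_cm^2 > 0 holds.
alpha_c, l_c = 0.5, 8          # critic learning rate, number of history samples
b_dH = 0.1                     # bound on the residual error delta_H
R_inv_norm = 1.0               # ||R^{-1}||
omega_max = 1.0                # bound on omega(xi)
kappa_cm, delta_cm = 0.5, 0.1  # bounds related to kappa_c and delta_c
lam_Q, lam_Psi = 1.0, 3.0      # lambda_min(Q), lambda_min(Psi)

num = alpha_c * (l_c + 1) * b_dH**2 + 2 * R_inv_norm**2 * omega_max**2 * delta_cm**2
den = alpha_c * lam_Psi - 2 * R_inv_norm**2 * omega_max**2 * kappa_cm**2
assert den > 0  # the condition required for dot-L(t) < 0
D1 = np.sqrt(num / (2 * lam_Q))
D2 = np.sqrt(num / den)
```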
Case 2: At the triggering instants, the difference of $L(t)$ is constructed as
$$\Delta L(t_{j+1}) = L(t_{j+1}^+) - L(t_{j+1}) = \frac{1}{2}\big[\tilde{\vartheta}_c^T(\xi(t_j^+))\tilde{\vartheta}_c(\xi(t_j^+)) - \tilde{\vartheta}_c^T(\xi(t_j))\tilde{\vartheta}_c(\xi(t_j))\big] + \big[\breve{\sigma}^2(t_{j+1}^+) - \breve{\sigma}^2(t_j)\big] + \big[\Gamma^*(\xi(t_j^+)) - \Gamma^*(\xi(t_j))\big] + \big[\Gamma^*(\bar{\xi}_{j+1}) - \Gamma^*(\bar{\xi}_j)\big].$$
According to case 1, L ( t ) exhibits continuous and monotonic decrease within [ t j , t j + 1 ) . Therefore, it follows that
L ( s + t j ) < L ( t j ) , s ( 0 , t j + 1 t j ) .
Taking the limit on both sides of (83) and defining $L(t_j^+) = \lim_{s\to 0^+}L(t_j+s)$, we obtain $L(t_j^+) \le L(t_j)$. Then we have
$$\Gamma^*(\xi(t_j^+)) + \breve{\sigma}^2(t_j^+) + \frac{1}{2}\tilde{\vartheta}_c^T(\xi(t_j^+))\tilde{\vartheta}_c(\xi(t_j^+)) + \Gamma^*(\bar{\xi}(t_j)) \le \frac{1}{2}\tilde{\vartheta}_c^T(\xi(t_j))\tilde{\vartheta}_c(\xi(t_j)) + \Gamma^*(\bar{\xi}_j) + \breve{\sigma}^2(t_j) + \Gamma^*(\xi(t_j)).$$
Additionally, $\Gamma^*(\xi)$ is continuously differentiable at the triggering instants $t_j$. Based on Case 1, one obtains
$$\Gamma^*(\bar{\xi}(t_{j+1})) \le \Gamma^*(\bar{\xi}_j).$$
Considering (84) and (85), it follows that $\Delta L(t) \le 0$.
Through the analysis of these two cases, the proof of the theorem is completed. □

5. Simulation

In this section, consider the following NLS system composed of two interconnected subsystems:
$$\dot{\xi}_1 = \begin{bmatrix} -\xi_{11}+\xi_{12}\cos\xi_{11} \\ -0.5(\xi_{11}+\xi_{12})+\sin^2\xi_{11} \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\big(\sigma_1(\xi_1)+\beta_1(t-t_T)f_{s1}(t)\big) + \begin{bmatrix} 1 \\ 0 \end{bmatrix}\varrho_1(\xi);$$
$$\dot{\xi}_2 = \begin{bmatrix} -\xi_{21}+\xi_{22}\cos(0.5\xi_{21}) \\ -\xi_{21}+\cos^2(\xi_{21}) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\big(\sigma_2(\xi_2)+\beta_2(t-t_T)f_{s2}(t)\big) + \begin{bmatrix} 1 \\ 0 \end{bmatrix}\varrho_2(\xi),$$
where $\xi_1 = [\xi_{11},\xi_{12}]^T\in\mathbb{R}^2$ and $\xi_2 = [\xi_{21},\xi_{22}]^T\in\mathbb{R}^2$ are the system states, and $\sigma_1(\xi_1)\in\mathbb{R}$ and $\sigma_2(\xi_2)\in\mathbb{R}$ are the control inputs. The interconnection terms are $\varrho_1 = p_{12}\xi_{11}\sin(\xi_{12}\xi_{21})$ and $\varrho_2 = p_{21}\xi_{21}\sin(\xi_{12}\xi_{21})$ with $p_{12}\in[-0.5,0.5]$ and $p_{21}\in[-0.25,0.25]$. Here, we further choose $\varrho_{1M}(\varsigma(\xi)) = 0.5|\xi_{11}|$ and $\varrho_{2M}(\varsigma(\xi)) = 0.5|\xi_{21}|$. Assuming that an actuator failure suddenly occurs at $t_T = 4\,\mathrm{s}$, the fault dynamics can be represented as
$$\beta_1(t-t_T)f_{s1}(t) = \begin{cases} 2\sin(0.5t), & t \ge 4 \\ 0, & t < 4 \end{cases} \qquad \beta_2(t-t_T)f_{s2}(t) = \begin{cases} 2.5\cos t, & t \ge 4 \\ 0, & t < 4 \end{cases}$$
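To illustrate how these fault signals enter the closed loop, the following forward-Euler sketch simulates the two subsystems under a simple placeholder state feedback. The gains `K1`, `K2`, the fixed coupling coefficients, and the initial states are assumptions for illustration only; the paper's actual inputs are the SMC and dynamic-ETC laws, not these gains.

```python
import numpy as np

# Forward-Euler simulation of the two interconnected subsystems with the
# actuator faults switching on at t_T = 4 s. K1, K2 and p12, p21 are
# illustrative placeholders, not the paper's control design.
dt, T, t_T = 1e-3, 10.0, 4.0
p12, p21 = 0.5, 0.25
K1 = K2 = np.array([1.0, 2.0])

x1 = np.array([1.0, 1.0])  # placeholder initial state for subsystem 1
x2 = np.array([1.0, 1.0])  # placeholder initial state for subsystem 2
for k in range(int(T / dt)):
    t = k * dt
    s1 = -K1 @ x1                                    # placeholder control 1
    s2 = -K2 @ x2                                    # placeholder control 2
    f1 = 2.0 * np.sin(0.5 * t) if t >= t_T else 0.0  # actuator fault 1
    f2 = 2.5 * np.cos(t) if t >= t_T else 0.0        # actuator fault 2
    r1 = p12 * x1[0] * np.sin(x1[1] * x2[0])         # interconnection term 1
    r2 = p21 * x2[0] * np.sin(x1[1] * x2[0])         # interconnection term 2
    dx1 = np.array([-x1[0] + x1[1] * np.cos(x1[0]),
                    -0.5 * (x1[0] + x1[1]) + np.sin(x1[0]) ** 2]) \
        + np.array([0.0, 1.0]) * (s1 + f1) + np.array([1.0, 0.0]) * r1
    dx2 = np.array([-x2[0] + x2[1] * np.cos(0.5 * x2[0]),
                    -x2[0] + np.cos(x2[0]) ** 2]) \
        + np.array([0.0, 1.0]) * (s2 + f2) + np.array([1.0, 0.0]) * r2
    x1 = x1 + dt * dx1
    x2 = x2 + dt * dx2
```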
In this simulation, the initial states of subsystem 1 and subsystem 2 are selected as $\xi_{10} = \xi_{20} = [1,1]^T$, and the relevant parameters are set to $R_1 = 1$, $R_2 = 1$, $Q_1 = \mathrm{diag}\{1,1\}$, $Q_2 = \mathrm{diag}\{1,1\}$. The cost function can be defined as
J ( ξ ) = ϑ ^ c T κ c ( ξ )
where $\hat{\vartheta}_c = [\hat{\vartheta}_{c1}, \hat{\vartheta}_{c2}]^T$, $\kappa_c(\xi) = [\kappa_{c1}(\xi), \kappa_{c2}(\xi)]$, $\kappa_{c1}(\xi) = [\xi_{11}^2, \xi_{11}\xi_{12}, \xi_{12}^2, \xi_{11}^4, \xi_{11}^3\xi_{12}, \xi_{11}^2\xi_{12}^2, \xi_{11}\xi_{12}^3, \xi_{12}^4]^T$, and $\kappa_{c2}(\xi) = [\xi_{21}^2, \xi_{21}\xi_{22}, \xi_{22}^2, \xi_{21}^4, \xi_{21}^3\xi_{22}, \xi_{21}^2\xi_{22}^2, \xi_{21}\xi_{22}^3, \xi_{22}^4]^T$. Moreover, the parameters in the dynamic triggering condition are set as = 0.5, $\rho = 5$, $r = 1$, and $\beta_2 = 0.1$.
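The critic feature vector $\kappa_{c1}(\xi)$ above is simply the set of quadratic and quartic monomials in $(\xi_{11},\xi_{12})$; a minimal sketch of its construction follows, where the weight vector is a placeholder (in the paper the weights are learned online, not fixed):

```python
import numpy as np

# Critic features kappa_c1(xi) for subsystem 1: the quadratic and quartic
# monomials listed above. theta_hat is a placeholder for the learned weights.
def kappa_c1(x11, x12):
    return np.array([x11**2, x11 * x12, x12**2,
                     x11**4, x11**3 * x12, x11**2 * x12**2,
                     x11 * x12**3, x12**4])

theta_hat = np.ones(8)                  # placeholder critic weights
J1 = theta_hat @ kappa_c1(1.0, -1.0)    # cost estimate at a sample state
```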
Figure 2 and Figure 3 illustrate the state trajectories of the subsystems. It is clear that all state trajectories can ultimately converge to the equilibrium point.
Figure 4 presents the SMC law σ 11 for subsystem 1, and the dynamic ETC input σ 01 for subsystem 1 is depicted in Figure 5.
Similarly, the SMC rule σ 12 for subsystem 2 is shown in Figure 6, while the dynamic ETC input σ 02 for subsystem 2 is illustrated in Figure 7.
Figure 8 and Figure 9 demonstrate the convergence behaviors of the critic NNs’ weights for subsystem 1 and subsystem 2. As shown in the figures, the weight vectors of the critic NNs for both subsystems converge within 150 s, indicating that the adaptive learning strategy designed in this article keeps all closed-loop signals bounded and drives the states to a small neighborhood of the equilibrium, thereby validating the UUB property of the control system.
Figure 10 presents the dynamic ETC condition. Figure 11 shows the inter-event sampling periods during the learning process, which indicates that the control approach presented in this article can effectively improve the utilization of communication resources.
The internal signal evolution of the dynamic ETC method is depicted in Figure 12. It is clear that the internal signal σ ˘ 2 remains positive at all times.
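For intuition about why such an internal variable remains positive, the sketch below implements a generic Girard-style dynamic event trigger on a scalar plant. This is a textbook construction under assumed parameters (`lam`, `theta`, `delta`) and an assumed plant, not the paper's condition (66) or its $\breve{\sigma}$ dynamics.

```python
import numpy as np

# Generic dynamic event trigger (Girard-style) on an unstable scalar plant
# x_dot = x + u with sampled feedback u = -2 * x_hat. The internal variable
# eta absorbs the slack of the static condition delta*x^2 - e^2 and an event
# fires only when eta + theta*slack would cross zero, so eta stays positive.
dt, T = 1e-3, 10.0
lam, theta, delta = 1.0, 0.5, 0.2
x, eta = 1.0, 1.0
x_hat = x                        # last sampled state held by the controller
etas, events = [], 0
for k in range(int(T / dt)):
    u = -2.0 * x_hat             # sampled-data feedback
    x += dt * (x + u)            # plant integration step
    e = x_hat - x                # sampling-induced error
    slack = delta * x**2 - e**2
    if eta + theta * slack < 0:  # dynamic triggering condition
        x_hat = x                # sample: reset the error to zero
        events += 1
        slack = delta * x**2
    eta += dt * (-lam * eta + slack)
    etas.append(eta)
```

Between events the condition guarantees `slack >= -eta/theta`, so `eta` decays no faster than exponentially and never reaches zero, mirroring the positivity of the internal signal observed in Figure 12.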

6. Conclusions

In our research, a new ISMC approach with a dynamic event-triggered mechanism has been designed for NLS systems with actuator faults. Firstly, ISMC technology keeps the motion trajectory of the subsystems on the SMS and eliminates the impact of actuator failures in NLS systems. Secondly, a dynamic event-triggered decentralized GCC scheme has been presented via the ADP method to ensure system stability while also guaranteeing that the cost function of the entire NLS system has an upper bound. In our design, the developed method decreases the consumption of communication resources. Moreover, the system states and the weight values of the critic NN have been proved to be UUB using the Lyapunov stability theorem. Finally, the effectiveness of the presented control strategy has been verified by a simulation example. In future research, the fault-tolerant control problems of more complex systems will be discussed and studied, such as sensor failure problems, control problems with asymmetric input constraints, and so on.

Author Contributions

Conceptualization, Y.L.; software, X.M.; methodology, Y.L.; validation, X.M. and K.Z.; writing—original draft preparation, X.M. and Y.L.; rigorous analysis, K.Z.; supervision, X.C.; data curation, K.Z. and H.J.; writing—review and editing, X.M.; funding acquisition, Y.L.; visualization, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62403329, 62203311).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The simulation data presented in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The control block diagram.
Figure 2. State trajectories of subsystem 1.
Figure 3. State trajectories of subsystem 2.
Figure 4. SMC signal for subsystem 1.
Figure 5. Dynamic ETC input for subsystem 1.
Figure 6. SMC signal for subsystem 2.
Figure 7. Dynamic ETC input for subsystem 2.
Figure 8. NN weight evolutions for subsystem 1.
Figure 9. NN weight evolutions for subsystem 2.
Figure 10. The triggering condition with $\|e_j\|^2$ and $\bar{e}_T$.
Figure 11. Sampling period during learning process.
Figure 12. Internal signal $\breve{\sigma}^2$.

