Fractal and Fractional
  • Article
  • Open Access

18 November 2025

Boundary Control for Consensus in Fractional-Order Multi-Agent Systems Under DoS Attacks and Actuator Failures

1 School of Information Science and Engineering, Linyi University, Linyi 276005, China
2 School of Computer and Information, Anhui Polytechnic University, Wuhu 241000, China
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Fractional Dynamics and Control in Multi-Agent Systems and Networks

Abstract

This paper investigates the consensus problem in fractional-order multi-agent systems (FOMASs) under Denial of Service (DoS) attacks and actuator faults. A boundary control strategy is proposed, which reduces dependence on internal sensors and actuators by utilizing only the state information at the system boundaries, significantly lowering control costs. To address DoS attacks, a buffer mechanism is designed to store valid control signals during communication interruptions and apply them once communication is restored, thereby enhancing the system’s robustness and stability. Additionally, this study considers the impact of actuator performance fluctuations on control effectiveness and proposes corresponding adjustment strategies to ensure that the system maintains consensus and stability even in the presence of actuator failures or performance variations. Finally, the effectiveness of the proposed method is validated through numerical experiments. The results show that, even under DoS attacks and actuator faults, the system can still successfully achieve consensus and maintain good stability, demonstrating the feasibility and effectiveness of this control approach in complex environments.

1. Introduction

Multi-agent systems (MASs) are systems composed of multiple agents, where each agent is an autonomous entity capable of making its own decisions and interacting with other agents and the environment. With the development of artificial intelligence, communication technologies, sensor technologies, and other fields, MASs have found widespread application in various domains. In robotics and automation, MASs are extensively used in drone swarm collaboration [1], robot formation control [2], and autonomous vehicles [3], where multiple robots collaborate to complete complex tasks. In intelligent transportation systems, MASs are applied in vehicle-to-everything communication [4], intelligent traffic signal control [5], and traffic flow management [6], improving traffic efficiency and safety. In MASs, traditional distributed control methods often face challenges such as communication delays [7], information loss, and reliance on a large number of sensors and actuators, especially as the system size increases or external attacks occur. These issues can lead to reduced control efficiency and even instability in the system. To address these problems, boundary control is introduced as an effective solution. Boundary control concentrates the control input at the system’s boundaries, rather than relying on the state of each internal agent [8], thereby reducing dependence on internal information and significantly lowering control costs. Additionally, boundary control has shown strong robustness in handling situations where internal spatial points are unavailable, communication is interrupted, or system failures occur, ensuring that the system can maintain consistency and stability even in harsh environments.
In MASs, traditional integer-order models often struggle to fully capture the complex dynamics and long-term memory effects present in the system. Fractional-order systems introduce the concept of fractional calculus, which can describe the system’s non-locality and history dependence, giving them a significant advantage in modeling complex systems with memory effects [9,10,11,12]. Liu et al. studied the adaptive bipartite containment control problem for non-affine FOMASs with disturbances and completely unknown higher-order dynamics, proposing a distributed adaptive control method [13]. Zhang et al. studied the consensus control problem for a class of FOMASs, using radial basis function neural networks in the controller design to approximate unknown nonlinear functions [14]. Liu et al. studied the adaptive containment control problem for a class of FOMASs with time-varying parameters and disturbances, proposing a new distributed error compensation method [15]. These works provide rich theoretical and methodological support for handling nonlinearity, unknown dynamics, and disturbances in FOMASs. Building on the intrinsic advantages of fractional-order models, this work investigates the consensus control problem of FOMASs, fully exploiting the model’s ability to capture complex system dynamics and long-memory effects.
DoS attacks disrupt communication links, causing intermittent failure of information exchange and directly violating the connectivity assumption underlying consensus control, leading at best to performance degradation and at worst to instability and cascading failures [16,17,18,19]. Therefore, it is essential to systematically investigate boundary-control strategies for consensus under DoS attacks to ensure reliable operation in mission-critical scenarios. Chen et al. proposed a signal-based communication scheme that addresses the security vulnerabilities in the local communication network [20]. Zhao et al. investigated secure cooperative tracking control for uncertain heterogeneous nonlinear MASs under DoS attacks; to ensure bounded tracking errors, they designed a sampled-data neural network controller [21]. Zhao et al. investigated a bipartite containment control method for MASs susceptible to DoS attacks and external disturbances. By adopting a fully distributed control protocol, they avoided reliance on global information and introduced an attack compensator to mitigate the adverse effects of DoS attacks [22]. The above studies mitigate the impact of DoS attacks on consensus control via event-triggered schemes, sampled-data neural networks, and hybrid triggering, significantly improving robustness and communication efficiency. Unlike prior work, this paper adopts a buffer mechanism. When a DoS attack prevents updates to the control signal, the actuator does not zero the input but applies the most recent valid control input stored in the buffer, thereby reducing performance drops and instability risk.
Actuator failures result in control input malfunctions or deviations, and actuator faults may destroy the stability and consensus of MASs [23,24,25,26]. Therefore, designing control schemes to handle actuator faults is essential to guarantee the reliable operation of the system in fault cases [27,28]. Lui et al. designed a new distributed adaptive control protocol that enhances the reliability of the networked system and improves its recovery capability in the event of actuator failures [29]. Su et al. explored adaptive event-driven optimal control for fault-tolerant consensus; their approach tackles the challenges of unknown control gains and actuator fault parameters and enhances the practicality of optimal consensus control [30]. The above studies propose innovative fault-tolerant control methods that offer effective solutions to the consensus and stability issues in MASs caused by actuator faults, providing strong guarantees for the reliability and consistency of MASs in practical applications.
Motivated by the above studies, this paper focuses on the boundary control problem for consensus of FOMASs under DoS attacks and actuator failures. Specifically, the innovative contributions of this paper include:
1. The boundary control-based strategy proposed in this paper significantly reduces the reliance on internal state information compared to traditional distributed control methods [31,32,33,34]. By controlling the system using only boundary information, it avoids the high demands on sensors and actuators, thus greatly reducing control costs. This approach has a clear advantage in large-scale systems, especially in situations where communication is interrupted or sensors are unavailable.
2. Unlike existing event-triggered control schemes [35], this paper introduces a buffer mechanism to address communication interruptions caused by DoS attacks. When communication is interrupted, the control signal is not set to zero but instead uses the most recent valid control input stored in the buffer, thereby reducing the risk of performance degradation and system instability.
3. This paper incorporates variations in actuator efficiency into the controller design, allowing for automatic adjustments when actuator failures or performance fluctuations occur. This innovation enables the control system to adapt in real time to changing operating conditions, ensuring the stability and consistency of the system in practical operations.

2. Problem Description and Preliminaries

The FOMAS [36] under DoS attacks and actuator failures studied in this paper is described as
$$
{}_{t_0}^{c}D_t^{\tau}z_i(x,t)=\Omega\frac{\partial^{2}z_i(x,t)}{\partial x^{2}}+Az_i(x,t)+f\big(z_i(x,t)\big),\qquad
\left.\frac{\partial z_i(x,t)}{\partial x}\right|_{x=0}=0,\qquad
\left.\frac{\partial z_i(x,t)}{\partial x}\right|_{x=L}=u_i(t),\qquad
z_i(x,0)=z_{i0}(x),\qquad i=1,2,\ldots,N,
$$
where $x$ represents the spatial variable and $t$ the time variable; $z_i(x,t)$ represents the state of the $i$-th agent; $u_i(t)$ represents the control input; the matrix $\Omega\in\mathbb{R}^{n\times n}$ is symmetric and positive definite; $A\in\mathbb{R}^{n\times n}$; $f(z_i(x,t))$ represents a nonlinear function; $z_{i0}(x)$ are the initial values.
Remark 1.
The model in [36] is a conventional linear control system, where the control input directly affects the state variables. The model in this paper is based on fractional-order differential equations, which capture the system’s long-term memory effects and non-local dynamics. It also incorporates an adjustment mechanism for changes in actuator efficiency to address actuator fluctuations.
The leader of FOMAS (1) is described as
$$
{}_{t_0}^{c}D_t^{\tau}y(x,t)=\Omega\frac{\partial^{2}y(x,t)}{\partial x^{2}}+Ay(x,t)+f\big(y(x,t)\big),\qquad
\left.\frac{\partial y(x,t)}{\partial x}\right|_{x=0}=0,\qquad
\left.\frac{\partial y(x,t)}{\partial x}\right|_{x=L}=0,\qquad
y(x,0)=y_{0}(x),
$$
where $y(x,t)$ is the state of the leader and $y_{0}(x)$ represents the initial values.
Consider an undirected graph with $N$ nodes represented by $\mathcal{Q}=(\mathcal{S},\mathcal{H})$, where the set $\mathcal{S}=\{1,2,\ldots,N\}$ defines the nodes and the set $\mathcal{H}\subseteq\mathcal{S}\times\mathcal{S}$ defines the edges. The Laplacian matrix $L$ of $\mathcal{Q}$ is defined as
$$
L_{mn}=\begin{cases}\displaystyle\sum_{d=1,\,d\neq m}^{N}g_{md}, & \text{if } m=n,\\[4pt] -g_{mn}, & \text{if } m\neq n \text{ and } (m,n)\in\mathcal{H},\\[4pt] 0, & \text{otherwise},\end{cases}
$$
where $g_{mn}$ represents the connection weight between nodes $m$ and $n$.
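As an illustration of this construction (not part of the original paper; all names are illustrative), the short Python sketch below builds the Laplacian from a symmetric weight matrix and checks its basic properties, using the four-agent coupling pattern of Example 1 purely as sample data.

```python
import numpy as np

def graph_laplacian(G: np.ndarray) -> np.ndarray:
    """Laplacian L = D - G, where D is the diagonal matrix of weighted degrees."""
    return np.diag(G.sum(axis=1)) - G

# Example weights: g_mn = 1/2 for every pair m != n of four agents, 0 on the diagonal.
G = 0.5 * (np.ones((4, 4)) - np.eye(4))
L = graph_laplacian(G)
print(L.sum(axis=1))                         # every row sums to zero
print(np.sort(np.linalg.eigvalsh(L))[1])     # lambda_2(L), the smallest non-zero eigenvalue
```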
The selected boundary controller is expressed as follows:
$$
u_i(t)=-k_t\,\rho_i(t)\sum_{j=1}^{N}g_{ij}\big(z_i(L,t)-z_j(L,t)\big),
$$
and
$$
k_t=\begin{cases}k_1, & t\in\Theta_0(t_0,t),\\ k_2, & t\in\Theta_a(t_0,t),\end{cases}
$$
where $\rho_i(t)$ represents the time-varying parameter of the $i$-th agent's actuator and $k_t$ is the control gain.
Remark 2.
Incorporating actuator performance variations into the controller design enhances the method’s fault tolerance. By adjusting in real time to actuator failures and performance fluctuations, the system can adapt to changes in the real-world environment, where actuator efficiency may not always be optimal.
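For concreteness, the following Python sketch (an illustrative assumption, not the authors' implementation) evaluates the boundary control law above for all agents at once: the gain switches between $k_1$ and $k_2$ according to whether a DoS attack is active, and each input is scaled by the actuator efficiency $\rho_i(t)$.

```python
import numpy as np

def boundary_control(zL, rho, G, k1, k2, under_attack):
    """zL: (N, n) boundary states z_i(L, t); rho: (N,) efficiencies rho_i(t);
    G: (N, N) symmetric weight matrix; returns the (N, n) control inputs u_i(t)."""
    k = k2 if under_attack else k1
    # sum_j g_ij (z_i(L,t) - z_j(L,t)), computed row-wise for every agent i
    disagreement = zL * G.sum(axis=1, keepdims=True) - G @ zL
    return -k * rho[:, None] * disagreement
```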
For clarity, $\hat{t}$ is introduced to distinguish between time intervals. Specifically, when $t\in\Theta_0(t_0,t)$, $\hat{t}=t$; and when $t$ belongs to the interval $[t_{2r+1},t_{2(r+1)})$, $\hat{t}=t_{2r+1}$. Two key time intervals are defined: $\Theta_a(t_0,t)$, which represents the interval with DoS attacks, and $\Theta_0(t_0,t)$, which represents the interval without DoS attacks. The attack time intervals are denoted as $T_r=[t_{2r+1},t_{2(r+1)})$. The following formula holds:
$$
\Theta_a(0,t)=\bigcup_{r\in\mathbb{N}}T_r\cap[0,t],
$$
where $\mathbb{N}$ is the set of natural numbers. The time interval without any attacks is then $\Theta_0(t_0,t)=[0,t]\setminus\Theta_a(0,t)$.
Remark 3.
During a DoS attack, communication is interrupted, and the agents are unable to receive new control inputs. In this case, the system uses the most recent valid control input stored in the buffer to update the agents’ states. Specifically, each agent stores the last successfully transmitted control signal in the buffer. When communication is interrupted, $u(t)$ is the most recent valid signal taken from the buffer, which means that the agent will use the historical data of $u(t)$ for state updates. When the attack ends and communication is restored, the most recent data stored in the buffer will be used to update the agents’ states. This strategy ensures that the system can maintain consistency during the attack without causing significant performance degradation or instability.
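A minimal sketch of this buffering logic is given below (hypothetical names; the paper does not prescribe an implementation): the last successfully received input is cached and replayed whenever a transmission is blocked by a DoS attack. During an attack the actuator therefore keeps applying the cached value, and the input is never set to zero.

```python
class ControlBuffer:
    """Stores the most recent valid control input, as described in Remark 3."""

    def __init__(self, u_init):
        self.last_valid = u_init

    def fetch(self, u_new, transmission_ok):
        if transmission_ok:          # normal communication: accept and store the new signal
            self.last_valid = u_new
        return self.last_valid       # under attack: replay the buffered value
```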
Definition 1
([37]). For MAS (1), given any initial condition, consensus is reached for every agent i if the following condition holds:
$$
\lim_{t\to\infty}\Big\|z_i(x,t)-\frac{1}{N}\sum_{j=1}^{N}z_j(x,t)\Big\|_2=0.
$$
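The consensus condition can be monitored numerically; the sketch below (illustrative only, not from the paper) approximates the $L_2$ norm in Definition 1 on a uniform spatial grid.

```python
import numpy as np

def consensus_error(z, dx):
    """z: (N, n, M) array of agent states z_i(x_k, t) on M spatial grid points;
    returns one L2 deviation-from-average value per agent."""
    deviation = z - z.mean(axis=0, keepdims=True)          # z_i - (1/N) sum_j z_j
    return np.sqrt(((deviation ** 2).sum(axis=1) * dx).sum(axis=-1))
```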
Definition 2
([38]). For $\tau$ satisfying $0<\tau<1$ and a differentiable function $p(s,t):[t_0,+\infty)\to\mathbb{R}$, the Caputo fractional derivative of order $\tau$ is defined as the following integral:
$$
{}_{t_0}^{c}D_t^{\tau}p(s,t)=\frac{1}{\Gamma(1-\tau)}\int_{t_0}^{t}\frac{\partial p(s,v)}{\partial v}\,\frac{1}{(t-v)^{\tau}}\,dv,
$$
where $\Gamma(\phi)=\int_{0}^{+\infty}h^{\phi-1}e^{-h}\,dh$.
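In simulations, this integral is usually replaced by a discrete approximation. The sketch below uses the standard L1 finite-difference scheme on a uniform time grid; it is one possible discretization given here for illustration, not the scheme used in the paper.

```python
import numpy as np
from math import gamma

def caputo_l1(p, dt, tau):
    """L1 approximation of the Caputo derivative of order 0 < tau < 1 at the last
    grid point, given samples p = [p(t_0), ..., p(t_n)] with uniform step dt."""
    p = np.asarray(p, dtype=float)
    n = len(p) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - tau) - j ** (1 - tau)        # L1 weights
    increments = p[n - j] - p[n - j - 1]             # p(t_{n-j}) - p(t_{n-j-1})
    return dt ** (-tau) / gamma(2 - tau) * np.sum(b * increments)
```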
Definition 3
([38]). The Mittag-Leffler function is a special function commonly used in fractional calculus and dynamical systems. It is expressed as:
$$
R_{\alpha}(t)=\sum_{d=0}^{\infty}\frac{t^{d}}{\Gamma(d\alpha+1)},
$$
where $\alpha>0$.
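A direct way to evaluate this series for moderate arguments is to truncate it; the snippet below is a simple illustration (for large $|t|$ more sophisticated algorithms are required).

```python
from math import gamma

def mittag_leffler(t, alpha, terms=80):
    """Truncated series for the one-parameter Mittag-Leffler function R_alpha(t)."""
    return sum(t ** d / gamma(d * alpha + 1) for d in range(terms))

print(mittag_leffler(1.0, 1.0))   # equals e when alpha = 1
```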
Lemma 1
([39]). Let $\xi\in\mathbb{R}^{n}$ be a square integrable function, and suppose $\xi(0)=0$ or $\xi(l)=0$. Then, for any positive real symmetric matrix $D$, the following inequality holds:
$$
\int_{0}^{l}\xi^{T}(s)D\xi(s)\,ds\le\frac{4l^{2}}{\pi^{2}}\int_{0}^{l}\dot{\xi}^{T}(s)D\dot{\xi}(s)\,ds.
$$
Lemma 2
([40]). Consider $L$ as the Laplacian matrix, $C$ as a positive definite symmetric matrix, and $m\in\mathbb{R}^{Nn}$ subject to the constraint $\mathbf{1}^{T}m=0$. Then, the following inequality holds:
$$
\lambda_{2}(L)\,m^{T}(I_{N}\otimes C)m\le m^{T}(L\otimes C)m,
$$
here, $\lambda_{2}(\cdot)$ refers to the smallest non-zero eigenvalue of the matrix, and $\mathbf{1}^{T}$ represents the transpose of a vector of ones of length $Nn$.
Lemma 3
([41]). Let $\omega(s,t):\mathbb{R}^{m}\times\mathbb{R}^{+}\to\mathbb{R}^{n}$ be a vector function that is differentiable with respect to time $t$. Then, the following inequality holds:
$$
\frac{1}{2}\,{}_{t_0}^{c}D_t^{\tau}\big(\omega^{T}(s,t)\omega(s,t)\big)\le\omega^{T}(s,t)\,{}_{t_0}^{c}D_t^{\tau}\omega(s,t).
$$
Lemma 4
([42]). Let the function $V(t)$ be continuous and suppose that it satisfies ${}_{t_0}^{c}D_t^{\tau}V(t)\le kV(t)$ with a constant $k<0$ and $0<\tau<1$. Then, the following conclusion holds:
$$
V(t)\le V(0)R_{\tau}(kt^{\tau}).
$$
Lemma 5
([43]). If $p(s)$ and $q(s)$ are square-integrable vector functions, then for any $\gamma>0$ the following inequality holds:
$$
2\int_{0}^{L}p^{T}(s)q(s)\,ds\le\gamma\int_{0}^{L}p^{T}(s)p(s)\,ds+\frac{1}{\gamma}\int_{0}^{L}q^{T}(s)q(s)\,ds.
$$
Assumption 1
([44]). Assume that the variation of the actuator's efficiency is limited, meaning that the efficiency parameter $\rho_i(t)$ satisfies
$$
0<\underline{\rho}_i\le\rho_i(t)\le\bar{\rho}_i<1,
$$
in which $\underline{\rho}_i$ and $\bar{\rho}_i$ are constants.
Assumption 2
([45]). Assume that the nonlinear function $f(\cdot)$ satisfies $f(0)=0$, and that for any scalars $x_1$ and $x_2$ there exists a constant $\chi>0$ such that the inequality $|f(x_1)-f(x_2)|\le\chi|x_1-x_2|$ holds.
Assumption 3
([46]). The number of attacks is limited. Let $F_1(t_0,t)$ represent the number of attacks during the interval $[t_0,t]$. There exist constants $\xi_1>0$ and $l_1>0$ such that
$$
F_1(t_0,t)\le\xi_1+\frac{t-t_0}{l_1}.
$$
Assumption 4
([46]). The total duration of attacks is limited. Let $F_2(t_0,t)$ represent the total attack duration during the interval $[t_0,t]$. There exist constants $\zeta_2>0$ and $l_2>0$ such that
$$
F_2(t_0,t)\le\zeta_2+\frac{t-t_0}{l_2}.
$$

3. Leaderless FOMASs Consensus Under Boundary Control

Let $\varepsilon(x,t)$ denote the consensus error; the corresponding error dynamics can then be described as follows:
$$
{}_{t_0}^{c}D_t^{\tau}\varepsilon(x,t)=(I_N\otimes\Omega)\frac{\partial^{2}\varepsilon(x,t)}{\partial x^{2}}+(I_N\otimes A)\varepsilon(x,t)+F\big(\varepsilon(x,t)\big),\qquad
\left.\frac{\partial\varepsilon(x,t)}{\partial x}\right|_{x=0}=0,\qquad
\left.\frac{\partial\varepsilon(x,t)}{\partial x}\right|_{x=L}=u(t),\qquad
\varepsilon(x,0)=\varepsilon_{0}(x),
$$
where $\varepsilon_i(x,t)=z_i(x,t)-\frac{1}{N}\sum_{j=1}^{N}z_j(x,t)$.
Theorem 1.
Under Assumptions 1–4, using controller (3), the leaderless FOMAS (1) can achieve consensus under actuator failures and DoS attacks if there exist positive constants $\beta_1$, $\tilde{\beta}_1$, $\gamma_1$, $\gamma_2$, $\sigma$ with $\beta_2>\tilde{\beta}_2>0$ and symmetric positive definite matrices $U_1$, $U_2$, $M_1$, $M_2$ satisfying
$$
\begin{bmatrix}\Delta_1 & U_1\\ \ast & -\gamma_1 I_n\end{bmatrix}<0,
$$
$$
\begin{bmatrix}\Delta_2 & U_2\\ \ast & -\gamma_2 I_n\end{bmatrix}<0,
$$
$$
\begin{bmatrix}-\tilde{\beta}_1(L\otimes U_1) & \bar{\Pi}_{12}\\ \ast & \bar{\Pi}_{22}\end{bmatrix}<0,
$$
$$
\begin{bmatrix}-\tilde{\beta}_2(L\otimes U_2) & \bar{\Sigma}_{12} & 0\\ \ast & \bar{\Sigma}_{22} & \bar{\Sigma}_{23}\\ \ast & \ast & -\sigma(L\otimes U_2)\end{bmatrix}<0,
$$
$$
\kappa_1=\beta_1-\frac{\ln(\mu_1\mu_2)}{l_1}-\frac{\beta_1+\tilde{\beta}_2}{l_2}>0,
$$
where
$$
\begin{aligned}
\Delta_1&=-0.25L^{-2}\pi^{2}(\Omega U_1+U_1\Omega)+(AU_1+U_1A^{T})+\gamma_1\chi^{2}I_n+(\beta_1+\tilde{\beta}_1)U_1,\\
\Delta_2&=-0.25L^{-2}\pi^{2}(\Omega U_2+U_2\Omega)+(AU_2+U_2A^{T})+\gamma_2\chi^{2}I_n-(\beta_2+\tilde{\beta}_2)U_2,\\
\bar{\Pi}_{12}&=0.25L^{-2}\pi^{2}\,L\otimes(\Omega U_1+U_1\Omega),\\
\bar{\Pi}_{22}&=-0.25L^{-2}\pi^{2}\,L\otimes(\Omega U_1+U_1\Omega)-\lambda_2(L\underline{\rho}L)L^{-1}(I_N\otimes\Omega M_1),\\
\bar{\Sigma}_{12}&=0.25L^{-2}\pi^{2}\,L\otimes(\Omega U_2+U_2\Omega),\qquad \bar{\Sigma}_{22}=-0.25L^{-2}\pi^{2}\,L\otimes(\Omega U_2+U_2\Omega),\\
\bar{\Sigma}_{23}&=-0.5k_2\lambda_2(L\underline{\rho}L)L^{-1}(I_N\otimes\Omega U_2),\qquad M_1=k_1U_1,\qquad M_2=k_2U_2.
\end{aligned}
$$
Proof. 
The selected Lyapunov function is expressed as follows
$$
V(t)=\begin{cases}V_1(t), & t\in[t_{2r},t_{2r+1}),\\ V_2(t), & t\in[t_{2r+1},t_{2(r+1)}).\end{cases}
$$
When $t\in[t_{2r},t_{2r+1})$, $V(t)=V_1(t)$, that is:
$$
V_1(t)=\frac{1}{2}\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_1)\varepsilon(x,t)\,dx.
$$
When $t\in[t_{2r+1},t_{2(r+1)})$, $V(t)=V_2(t)$, that is:
$$
V_2(t)=\frac{1}{2}\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_2)\varepsilon(x,t)\,dx+\sigma\int_{0}^{L}\varepsilon^{T}(L,t_{2r+1})(L\otimes P_2)\varepsilon(L,t_{2r+1})\,dx,
$$
in which $\sigma>0$.
According to Lemma 3, the fractional derivative of $V_1(t)$ can be derived as:
$$
{}_{t_0}^{c}D_t^{\tau}V_1(t)\le\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_1)\,{}_{t_0}^{c}D_t^{\tau}\varepsilon(x,t)\,dx
=\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_1)\Big[(I_N\otimes\Omega)\frac{\partial^{2}\varepsilon(x,t)}{\partial x^{2}}+(I_N\otimes A)\varepsilon(x,t)+F\big(\varepsilon(x,t)\big)\Big]dx.
$$
By combining controller (3) with Lemmas 1 and 2, it can be concluded that:
$$
\begin{aligned}
\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_1\Omega)\frac{\partial^{2}\varepsilon(x,t)}{\partial x^{2}}\,dx
&=\varepsilon^{T}(x,t)(L\otimes P_1\Omega)\frac{\partial\varepsilon(x,t)}{\partial x}\Big|_{x=0}^{x=L}
-\int_{0}^{L}\frac{\partial\varepsilon^{T}(x,t)}{\partial x}(L\otimes P_1\Omega)\frac{\partial\varepsilon(x,t)}{\partial x}\,dx\\
&\le-k_1\varepsilon^{T}(L,t)(L\underline{\rho}L\otimes P_1\Omega)\varepsilon(L,t)
-\int_{0}^{L}\frac{\partial\varepsilon^{T}(x,t)}{\partial x}(L\otimes P_1\Omega)\frac{\partial\varepsilon(x,t)}{\partial x}\,dx\\
&\le-0.25L^{-2}\pi^{2}\int_{0}^{L}\big(\varepsilon(x,t)-\varepsilon(L,t)\big)^{T}\big[L\otimes(P_1\Omega+\Omega P_1)\big]\big(\varepsilon(x,t)-\varepsilon(L,t)\big)dx\\
&\quad-k_1\lambda_2(L\underline{\rho}L)\,\varepsilon^{T}(L,t)(I_N\otimes P_1\Omega)\varepsilon(L,t).
\end{aligned}
$$
Under Lemma 5 and Assumption 2, it follows that
$$
\begin{aligned}
\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_1)F\big(\varepsilon(x,t)\big)\,dx
&\le\gamma_1\int_{0}^{L}F^{T}\big(\varepsilon(x,t)\big)(L\otimes P_1^{2})F\big(\varepsilon(x,t)\big)\,dx
+\frac{1}{\gamma_1}\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes I_n)\varepsilon(x,t)\,dx\\
&\le\gamma_1\chi^{2}\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_1^{2})\varepsilon(x,t)\,dx
+\frac{1}{\gamma_1}\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes I_n)\varepsilon(x,t)\,dx\\
&=\int_{0}^{L}\varepsilon^{T}(x,t)\Big[L\otimes\Big(\gamma_1\chi^{2}P_1^{2}+\frac{1}{\gamma_1}I_n\Big)\Big]\varepsilon(x,t)\,dx.
\end{aligned}
$$
By inserting (14) and (15) into Equation (13), the following result can be derived:
$$
\begin{aligned}
{}_{t_0}^{c}D_t^{\tau}V_1(t)
&\le-0.25L^{-2}\pi^{2}\int_{0}^{L}\varepsilon^{T}(x,t)\big[L\otimes(P_1\Omega+\Omega P_1)\big]\varepsilon(x,t)\,dx
+0.5L^{-2}\pi^{2}\int_{0}^{L}\varepsilon^{T}(x,t)\big[L\otimes(P_1\Omega+\Omega P_1)\big]\varepsilon(L,t)\,dx\\
&\quad-0.25L^{-2}\pi^{2}\int_{0}^{L}\varepsilon^{T}(L,t)\big[L\otimes(P_1\Omega+\Omega P_1)\big]\varepsilon(L,t)\,dx
-k_1\lambda_2(L\underline{\rho}L)L^{-1}\int_{0}^{L}\varepsilon^{T}(L,t)(I_N\otimes P_1\Omega)\varepsilon(L,t)\,dx\\
&\quad+0.5\int_{0}^{L}\varepsilon^{T}(x,t)\big[L\otimes(P_1A+A^{T}P_1)\big]\varepsilon(x,t)\,dx
+\int_{0}^{L}\varepsilon^{T}(x,t)\Big[L\otimes\Big(\gamma_1\chi^{2}P_1^{2}+\frac{1}{\gamma_1}I_n\Big)\Big]\varepsilon(x,t)\,dx\\
&\le-\beta_1V_1(t)-\tilde{\beta}_1\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_1)\varepsilon(x,t)\,dx
+0.5L^{-2}\pi^{2}\int_{0}^{L}\varepsilon^{T}(x,t)\big[L\otimes(P_1\Omega+\Omega P_1)\big]\varepsilon(L,t)\,dx\\
&\quad-0.25L^{-2}\pi^{2}\int_{0}^{L}\varepsilon^{T}(L,t)\big[L\otimes(P_1\Omega+\Omega P_1)\big]\varepsilon(L,t)\,dx
-k_1\lambda_2(L\underline{\rho}L)L^{-1}\int_{0}^{L}\varepsilon^{T}(L,t)(I_N\otimes P_1\Omega)\varepsilon(L,t)\,dx\\
&\le-\beta_1V_1(t)+\int_{0}^{L}\tilde{\varepsilon}^{T}(x,t)\,\Pi\,\tilde{\varepsilon}(x,t)\,dx,
\end{aligned}
$$
where $\tilde{\varepsilon}(x,t)=[\varepsilon^{T}(x,t),\varepsilon^{T}(L,t)]^{T}$ and
$$
\Pi=\begin{bmatrix}\Pi_{11} & \Pi_{12}\\ \ast & \Pi_{22}\end{bmatrix}<0,
$$
in which
$$
\Pi_{11}=-\tilde{\beta}_1(L\otimes P_1),\qquad
\Pi_{12}=0.25L^{-2}\pi^{2}\,L\otimes(P_1\Omega+\Omega P_1),
$$
$$
\Pi_{22}=-0.25L^{-2}\pi^{2}\,L\otimes(P_1\Omega+\Omega P_1)-k_1\lambda_2(L\underline{\rho}L)L^{-1}(I_N\otimes P_1\Omega).
$$
Let $U_1=P_1^{-1}$ and $M_1=k_1P_1^{-1}$; based on Equations (5) and (6), it can be concluded that $\Pi<0$. Therefore, ${}_{t_0}^{c}D_t^{\tau}V_1(t)\le-\beta_1V_1(t)$. By Lemma 4, one has $V_1(t)\le V_1(0)R_{\tau}(kt^{\tau})$ for any $t\ge0$.
The fractional derivative of $V_2(t)$ can be derived as:
$$
\begin{aligned}
{}_{t_0}^{c}D_t^{\tau}V_2(t)
&\le\beta_2V_2(t)-\tilde{\beta}_2\int_{0}^{L}\varepsilon^{T}(x,t)(L\otimes P_2)\varepsilon(x,t)\,dx
-\sigma\int_{0}^{L}\varepsilon^{T}(L,t_{2r+1})(L\otimes P_2)\varepsilon(L,t_{2r+1})\,dx\\
&\quad+0.5L^{-2}\pi^{2}\int_{0}^{L}\varepsilon^{T}(x,t)\big[L\otimes(P_2\Omega+\Omega P_2)\big]\varepsilon(L,t)\,dx
-0.25L^{-2}\pi^{2}\int_{0}^{L}\varepsilon^{T}(L,t)\big[L\otimes(P_2\Omega+\Omega P_2)\big]\varepsilon(L,t)\,dx\\
&\quad-k_2\lambda_2(L\underline{\rho}L)L^{-1}\int_{0}^{L}\varepsilon^{T}(L,t)(I_N\otimes P_2\Omega)\varepsilon(L,t_{2r+1})\,dx\\
&\le\tilde{\beta}_2V_2(t)+\int_{0}^{L}\hat{\varepsilon}^{T}(x,t)\,\Sigma\,\hat{\varepsilon}(x,t)\,dx,
\end{aligned}
$$
where $\hat{\varepsilon}(x,t)=[\varepsilon^{T}(x,t),\varepsilon^{T}(L,t),\varepsilon^{T}(L,t_{2r+1})]^{T}$ and
$$
\Sigma=\begin{bmatrix}\Sigma_{11} & \Sigma_{12} & 0\\ \ast & \Sigma_{22} & \Sigma_{23}\\ \ast & \ast & \Sigma_{33}\end{bmatrix}<0,
$$
in which
$$
\Sigma_{11}=-\tilde{\beta}_2(L\otimes P_2),\qquad
\Sigma_{12}=0.25L^{-2}\pi^{2}\,L\otimes(P_2\Omega+\Omega P_2),
$$
$$
\Sigma_{22}=-0.25L^{-2}\pi^{2}\,L\otimes(P_2\Omega+\Omega P_2),\qquad
\Sigma_{23}=-0.5k_2\lambda_2(L\underline{\rho}L)L^{-1}(I_N\otimes P_2\Omega),
$$
$$
\Sigma_{33}=-\sigma(L\otimes P_2).
$$
Let $U_2=P_2^{-1}$ and $M_2=k_2P_2^{-1}$; based on Equations (7) and (8), it can be concluded that $\Sigma<0$. Therefore, ${}_{t_0}^{c}D_t^{\tau}V_2(t)\le\beta_2V_2(t)$. By Lemma 4, one has $V_2(t)\le V_2(0)R_{\tau}(kt^{\tau})$ for any $t\ge0$.
Based on (16) and (17), it follows
$$
V(t)\le\begin{cases}e^{-\beta_1(t-t_{2r})}V(t_{2r}), & t\in[t_{2r},t_{2r+1}),\\ e^{\tilde{\beta}_2(t-t_{2r+1})}V(t_{2r+1}), & t\in[t_{2r+1},t_{2(r+1)}).\end{cases}
$$
Let $\mu_1=\frac{\lambda_{\max}(P_1)}{\lambda_{\min}(P_2)}$ and $\mu_2=\frac{2\lambda_{\max}(P_2)}{\lambda_{\min}(P_1)}$; then
$$
V(t_{2r})=\frac{1}{2}\int_{0}^{L}\varepsilon^{T}(x,t_{2r})(L\otimes P_1)\varepsilon(x,t_{2r})\,dx\le\mu_1V(t_{2r}^{-}),
$$
and, using a similar derivation, it can be concluded that $V(t_{2r+1})\le\mu_2V(t_{2r+1}^{-})$.
When the agents are not subjected to DoS attacks, it can be deduced that:
$$
\begin{aligned}
V(t)&\le e^{-\beta_1(t-t_{2r})}V(t_{2r})\le\mu_1e^{-\beta_1(t-t_{2r})}e^{\tilde{\beta}_2(t_{2r}-t_{2r-1})}V(t_{2r-1})\\
&\le\mu_1\mu_2e^{-\beta_1(t-t_{2r})}e^{\tilde{\beta}_2(t_{2r}-t_{2r-1})}V(t_{2r-1}^{-})\le\cdots\\
&\le(\mu_1\mu_2)^{F_1(t_0,t)}e^{\tilde{\beta}_2F_2(t_0,t)}e^{-\beta_1(t-t_0-F_2(t_0,t))}V(t_0).
\end{aligned}
$$
By combining Assumptions 3 and 4, it can be concluded that:
$$
\begin{aligned}
V(t)&\le(\mu_1\mu_2)^{\xi_1}e^{\frac{\ln(\mu_1\mu_2)}{l_1}(t-t_0)}e^{\zeta_2(\beta_1+\tilde{\beta}_2)}e^{\frac{\beta_1+\tilde{\beta}_2}{l_2}(t-t_0)}e^{-\beta_1(t-t_0)}V(t_0)\\
&\le(\mu_1\mu_2)^{\xi_1}e^{\zeta_2(\beta_1+\tilde{\beta}_2)}e^{\big(\frac{\ln(\mu_1\mu_2)}{l_1}+\frac{\beta_1+\tilde{\beta}_2}{l_2}-\beta_1\big)(t-t_0)}V(t_0)\\
&=\Phi e^{\big(\frac{\ln(\mu_1\mu_2)}{l_1}+\frac{\beta_1+\tilde{\beta}_2}{l_2}-\beta_1\big)(t-t_0)}V(t_0),
\end{aligned}
$$
where $\Phi=(\mu_1\mu_2)^{\xi_1}e^{\zeta_2(\beta_1+\tilde{\beta}_2)}$.
When the agents are subjected to DoS attacks,
$$
\begin{aligned}
V(t)&\le e^{\tilde{\beta}_2(t-t_{2r+1})}V(t_{2r+1})\le\mu_2e^{\tilde{\beta}_2(t-t_{2r+1})}V(t_{2r+1}^{-})
\le\mu_2\mu_1e^{\tilde{\beta}_2(t-t_{2r+1})}e^{-\beta_1(t_{2r+1}-t_{2r})}V(t_{2r})\le\cdots\\
&\le(\mu_1\mu_2)^{F_1(t_0,t)}e^{\tilde{\beta}_2F_2(t_0,t)}e^{-\beta_1(t-t_0-F_2(t_0,t))}V(t_0)
\le\Phi e^{\big(\frac{\ln(\mu_1\mu_2)}{l_1}+\frac{\beta_1+\tilde{\beta}_2}{l_2}-\beta_1\big)(t-t_0)}V(t_0).
\end{aligned}
$$
Consequently, $V(t)\le\Phi e^{-\kappa_1(t-t_0)}V(t_0)$ with $\kappa_1>0$, implying that $\varepsilon(x,t)$ tends toward zero as time progresses. Therefore, controller (3) can ensure the consensus of the leaderless FOMAS (1). □

4. Leader-Following FOMASs Consensus Under Boundary Control

Let $o(x,t)$ denote the tracking error; the corresponding error dynamics can then be described as follows:
$$
{}_{t_0}^{c}D_t^{\tau}o(x,t)=(I_N\otimes\Omega)\frac{\partial^{2}o(x,t)}{\partial x^{2}}+(I_N\otimes A)o(x,t)+F\big(o(x,t)\big),\qquad
\left.\frac{\partial o(x,t)}{\partial x}\right|_{x=0}=0,\qquad
\left.\frac{\partial o(x,t)}{\partial x}\right|_{x=L}=u(t),\qquad
o(x,0)=o_{0}(x),
$$
where $o_i(x,t)=z_i(x,t)-y(x,t)$.
The boundary controller is designed as follows:
$$
u_i(t)=-k_t\,\rho_i(t)\Big[\sum_{j=1}^{N}g_{ij}\big(z_i(L,t)-z_j(L,t)\big)+\psi_i\big(z_i(L,t)-y(L,t)\big)\Big],
$$
where $\psi_i>0$ when there is communication between the leader and follower $i$; otherwise, $\psi_i=0$.
Remark 4.
This section discusses the leader-following FOMAS, where the error is defined as the difference between the follower’s state and the leader’s state, and the controller not only considers the state differences between the agents but also introduces feedback from the leader’s state, ensuring that the followers track the leader’s state, thereby achieving consensus.
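The leader-following law differs from the leaderless one only by the pinning term toward the leader; a corresponding Python sketch (illustrative, with hypothetical names, not the authors' code) is shown below.

```python
import numpy as np

def leader_following_control(zL, yL, psi, rho, G, k):
    """zL: (N, n) follower boundary states; yL: (n,) leader boundary state;
    psi: (N,) pinning gains, psi_i > 0 only if agent i receives the leader's state."""
    disagreement = zL * G.sum(axis=1, keepdims=True) - G @ zL   # sum_j g_ij (z_i - z_j)
    pinning = psi[:, None] * (zL - yL[None, :])                 # psi_i (z_i(L,t) - y(L,t))
    return -k * rho[:, None] * (disagreement + pinning)
```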
Theorem 2.
Under Assumptions 1–4, using controller (19), the leader-following FOMAS (1) can achieve consensus under actuator failures and DoS attacks if there exist positive constants $\theta_1$, $\tilde{\theta}_1$, $\gamma_1$, $\gamma_2$, $\sigma$ with $\theta_2>\tilde{\theta}_2>0$ and symmetric positive definite matrices $U_3$, $U_4$, $M_3$, $M_4$ satisfying
$$
\begin{bmatrix}\Delta_1 & U_3\\ \ast & -\gamma_1 I_n\end{bmatrix}<0,
$$
$$
\begin{bmatrix}\Delta_2 & U_4\\ \ast & -\gamma_2 I_n\end{bmatrix}<0,
$$
$$
\begin{bmatrix}-\tilde{\theta}_1(W_L\otimes U_3) & \bar{\Pi}_{12}\\ \ast & \bar{\Pi}_{22}\end{bmatrix}<0,
$$
$$
\begin{bmatrix}-\tilde{\theta}_2(W_L\otimes U_4) & \bar{\Sigma}_{12} & 0\\ \ast & \bar{\Sigma}_{22} & \bar{\Sigma}_{23}\\ \ast & \ast & -\sigma(W_L\otimes U_4)\end{bmatrix}<0,
$$
$$
\kappa_2=\theta_1-\frac{\ln(\mu_1\mu_2)}{l_1}-\frac{\theta_1+\tilde{\theta}_2}{l_2}>0,
$$
where
$$
\begin{aligned}
\Delta_1&=-0.25L^{-2}\pi^{2}(\Omega U_3+U_3\Omega)+(AU_3+U_3A^{T})+\gamma_1\chi^{2}I_n+(\theta_1+\tilde{\theta}_1)U_3,\\
\Delta_2&=-0.25L^{-2}\pi^{2}(\Omega U_4+U_4\Omega)+(AU_4+U_4A^{T})+\gamma_2\chi^{2}I_n-(\theta_2+\tilde{\theta}_2)U_4,\\
\bar{\Pi}_{12}&=0.25L^{-2}\pi^{2}\,W_L\otimes(\Omega U_3+U_3\Omega),\\
\bar{\Pi}_{22}&=-0.25L^{-2}\pi^{2}\,W_L\otimes(\Omega U_3+U_3\Omega)-\lambda_2(W_L\underline{\rho}W_L)L^{-1}(I_N\otimes\Omega M_3),\\
\bar{\Sigma}_{12}&=0.25L^{-2}\pi^{2}\,W_L\otimes(\Omega U_4+U_4\Omega),\qquad \bar{\Sigma}_{22}=-0.25L^{-2}\pi^{2}\,W_L\otimes(\Omega U_4+U_4\Omega),\\
\bar{\Sigma}_{23}&=-0.5k_2\lambda_2(W_L\underline{\rho}W_L)L^{-1}(I_N\otimes\Omega U_4),\qquad M_3=k_1U_3,\qquad M_4=k_2U_4.
\end{aligned}
$$
Proof. 
Given that some steps in the proof are similar to the derivation in Theorem 1, this section will focus solely on explaining the differences between the two for simplicity. Choose the Lyapunov function given below:
$$
V(t)=\begin{cases}V_3(t), & t\in[t_{2r},t_{2r+1}),\\ V_4(t), & t\in[t_{2r+1},t_{2(r+1)}).\end{cases}
$$
When $t\in[t_{2r},t_{2r+1})$, $V(t)=V_3(t)$, that is:
$$
V_3(t)=\frac{1}{2}\int_{0}^{L}o^{T}(x,t)(W_L\otimes P_3)o(x,t)\,dx,
$$
where $W_L=L+\mathrm{diag}[\psi_1,\psi_2,\ldots,\psi_N]$.
When $t\in[t_{2r+1},t_{2(r+1)})$, $V(t)=V_4(t)$, that is:
$$
V_4(t)=\frac{1}{2}\int_{0}^{L}o^{T}(x,t)(W_L\otimes P_4)o(x,t)\,dx+\sigma\int_{0}^{L}o^{T}(L,t_{2r+1})(W_L\otimes P_4)o(L,t_{2r+1})\,dx.
$$
By combining the proof process of Theorem 1, it can be concluded that
$$
\begin{aligned}
{}_{t_0}^{c}D_t^{\tau}V_3(t)
&\le-\theta_1V_3(t)-\tilde{\theta}_1\int_{0}^{L}o^{T}(x,t)(W_L\otimes P_3)o(x,t)\,dx
+0.5L^{-2}\pi^{2}\int_{0}^{L}o^{T}(x,t)\big[W_L\otimes(P_3\Omega+\Omega P_3)\big]o(L,t)\,dx\\
&\quad-0.25L^{-2}\pi^{2}\int_{0}^{L}o^{T}(L,t)\big[W_L\otimes(P_3\Omega+\Omega P_3)\big]o(L,t)\,dx
-k_1\lambda_2(W_L\underline{\rho}W_L)L^{-1}\int_{0}^{L}o^{T}(L,t)(I_N\otimes P_3\Omega)o(L,t)\,dx\\
&\le-\theta_1V_3(t)+\int_{0}^{L}\tilde{o}^{T}(x,t)\,\Pi\,\tilde{o}(x,t)\,dx,
\end{aligned}
$$
where $\tilde{o}(x,t)=[o^{T}(x,t),o^{T}(L,t)]^{T}$ and
$$
\Pi=\begin{bmatrix}\Pi_{11} & \Pi_{12}\\ \ast & \Pi_{22}\end{bmatrix}<0,
$$
in which
$$
\Pi_{11}=-\tilde{\theta}_1(W_L\otimes P_3),\qquad
\Pi_{12}=0.25L^{-2}\pi^{2}\,W_L\otimes(P_3\Omega+\Omega P_3),
$$
$$
\Pi_{22}=-0.25L^{-2}\pi^{2}\,W_L\otimes(P_3\Omega+\Omega P_3)-k_1\lambda_2(W_L\underline{\rho}W_L)L^{-1}(I_N\otimes P_3\Omega).
$$
Let $U_3=P_3^{-1}$ and $M_3=k_1P_3^{-1}$; similar to the proof of Theorem 1, $V_3(t)\le V_3(0)R_{\tau}(kt^{\tau})$ for any $t\ge0$.
The fractional derivative of V 4 ( t ) can be derived as:
$$
\begin{aligned}
{}_{t_0}^{c}D_t^{\tau}V_4(t)
&\le\theta_2V_4(t)-\tilde{\theta}_2\int_{0}^{L}o^{T}(x,t)(W_L\otimes P_4)o(x,t)\,dx
-\sigma\int_{0}^{L}o^{T}(L,t_{2r+1})(W_L\otimes P_4)o(L,t_{2r+1})\,dx\\
&\quad+0.5L^{-2}\pi^{2}\int_{0}^{L}o^{T}(x,t)\big[W_L\otimes(P_4\Omega+\Omega P_4)\big]o(L,t)\,dx
-0.25L^{-2}\pi^{2}\int_{0}^{L}o^{T}(L,t)\big[W_L\otimes(P_4\Omega+\Omega P_4)\big]o(L,t)\,dx\\
&\quad-k_2\lambda_2(W_L\underline{\rho}W_L)L^{-1}\int_{0}^{L}o^{T}(L,t)(I_N\otimes P_4\Omega)o(L,t_{2r+1})\,dx\\
&\le\tilde{\theta}_2V_4(t)+\int_{0}^{L}\hat{o}^{T}(x,t)\,\Sigma\,\hat{o}(x,t)\,dx,
\end{aligned}
$$
where $\hat{o}(x,t)=[o^{T}(x,t),o^{T}(L,t),o^{T}(L,t_{2r+1})]^{T}$ and
$$
\Sigma=\begin{bmatrix}\Sigma_{11} & \Sigma_{12} & 0\\ \ast & \Sigma_{22} & \Sigma_{23}\\ \ast & \ast & \Sigma_{33}\end{bmatrix}<0,
$$
in which
$$
\Sigma_{11}=-\tilde{\theta}_2(W_L\otimes P_4),\qquad
\Sigma_{12}=0.25L^{-2}\pi^{2}\,W_L\otimes(P_4\Omega+\Omega P_4),
$$
$$
\Sigma_{22}=-0.25L^{-2}\pi^{2}\,W_L\otimes(P_4\Omega+\Omega P_4),\qquad
\Sigma_{23}=-0.5k_2\lambda_2(W_L\underline{\rho}W_L)L^{-1}(I_N\otimes P_4\Omega),
$$
$$
\Sigma_{33}=-\sigma(W_L\otimes P_4).
$$
Let $U_4=P_4^{-1}$ and $M_4=k_2P_4^{-1}$; similar to the proof of Theorem 1, $V_4(t)\le V_4(0)R_{\tau}(kt^{\tau})$ for any $t\ge0$, and
$$
\begin{aligned}
V(t)&\le(\mu_1\mu_2)^{\xi_1}e^{\frac{\ln(\mu_1\mu_2)}{l_1}(t-t_0)}e^{\zeta_2(\theta_1+\tilde{\theta}_2)}e^{\frac{\theta_1+\tilde{\theta}_2}{l_2}(t-t_0)}e^{-\theta_1(t-t_0)}V(t_0)\\
&\le(\mu_1\mu_2)^{\xi_1}e^{\zeta_2(\theta_1+\tilde{\theta}_2)}e^{\big(\frac{\ln(\mu_1\mu_2)}{l_1}+\frac{\theta_1+\tilde{\theta}_2}{l_2}-\theta_1\big)(t-t_0)}V(t_0)\\
&=\Phi e^{\big(\frac{\ln(\mu_1\mu_2)}{l_1}+\frac{\theta_1+\tilde{\theta}_2}{l_2}-\theta_1\big)(t-t_0)}V(t_0),
\end{aligned}
$$
where $\Phi=(\mu_1\mu_2)^{\xi_1}e^{\zeta_2(\theta_1+\tilde{\theta}_2)}$. Similarly, when the agents are subjected to DoS attacks,
$$
\begin{aligned}
V(t)&\le e^{\tilde{\theta}_2(t-t_{2r+1})}V(t_{2r+1})\le\mu_2e^{\tilde{\theta}_2(t-t_{2r+1})}V(t_{2r+1}^{-})
\le\mu_2\mu_1e^{\tilde{\theta}_2(t-t_{2r+1})}e^{-\theta_1(t_{2r+1}-t_{2r})}V(t_{2r})\le\cdots\\
&\le(\mu_1\mu_2)^{F_1(t_0,t)}e^{\tilde{\theta}_2F_2(t_0,t)}e^{-\theta_1(t-t_0-F_2(t_0,t))}V(t_0)
\le\Phi e^{\big(\frac{\ln(\mu_1\mu_2)}{l_1}+\frac{\theta_1+\tilde{\theta}_2}{l_2}-\theta_1\big)(t-t_0)}V(t_0).
\end{aligned}
$$
Consequently, $V(t)\le\Phi e^{-\kappa_2(t-t_0)}V(t_0)$ with $\kappa_2>0$, implying that $o(x,t)$ tends toward zero as time progresses. Therefore, controller (19) can ensure the consensus of the leader-following FOMAS (1). □
Remark 5.
In traditional distributed control methods, each agent typically relies on its internal state information for control [47,48]. This means that each agent needs to deploy sensors to collect its state and transmit this data over the network to other agents or a central controller. As the system scales, this approach faces significant communication and computational burdens. Additionally, the demand for sensors and actuators increases, leading to higher hardware costs and greater control complexity. In contrast, the boundary control method proposed in this paper significantly reduces the reliance on internal state information. By using only boundary data for control, the boundary control approach eliminates the need for each agent to report its internal state fully. This allows the system to reduce its dependency on sensors and actuators, thereby lowering hardware costs. Furthermore, the computational burden of boundary control is reduced, as it minimizes the amount of communication required and eliminates the need for real-time monitoring of each agent’s internal state. As a result, boundary control not only reduces the need for complex sensor networks but also minimizes the computational and communication load associated with state feedback.
Remark 6.
To enhance the system’s resilience and stability under DoS attacks, this paper introduces a buffering mechanism that allows agents to temporarily store the last valid control signal received during the attack. When the attack ends and communication is restored, the system uses the data in the buffer to continue updating the agents’ states. Thus, even during the attack, when new control signals cannot be received, the agents can continue to operate, preventing system stagnation or inconsistency due to information loss. Furthermore, by adopting a boundary-based control strategy, the system significantly reduces its dependence on internal state information, lowering control complexity and communication overhead, which helps maintain stability even during external attacks or internal communication interruptions.
Remark 7.
This strategy is applicable to real-world scenarios such as distributed robotics, intelligent transportation, and infrastructure management, which require resistance to communication attacks and tolerance of hardware faults. By decreasing the dependence on internal state information, fault tolerance can be improved.

5. Numerical Simulation

This section validates the correctness of the proposed method with two numerical examples.
Example 1.
The verification of Theorem 1 is achieved by introducing a leaderless FOMAS consisting of four agents with the following characteristics: $\underline{\rho}=\mathrm{diag}([0.25,0.25,0.25,0.25])$, $L=1$, $n=2$, $i\in\{1,2,3,4\}$, $\tau=0.95$, $f(z(x,t))=\tanh(z(x,t))$, $\beta_1=4.4$, $\tilde{\beta}_1=1.1$, $\beta_2=13$, $\tilde{\beta}_2=9.4$, $\sigma=5.0$, $l_1=3.2$, $l_2=3.9$, $\Omega=\begin{bmatrix}3 & 0\\ 0 & 3\end{bmatrix}$, $A=\begin{bmatrix}1.1 & 0.5\\ 1 & 2.3\end{bmatrix}$, and the coupling matrix $G=\frac{1}{2}\begin{bmatrix}0 & 1 & 1 & 1\\ 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1\\ 1 & 1 & 1 & 0\end{bmatrix}$.
The initial conditions are as follows:
$$
\begin{aligned}
z_{10}(x)&=\big[0.75\cos(1.2\pi x-\pi/4)-1.0,\ 0.6\sin(1.1\pi x+\pi/3)\big]^{T}, &
z_{20}(x)&=\big[0.4\log(1+x)+1.1,\ 0.3x^{3}\big]^{T},\\
z_{30}(x)&=\big[0.5\tanh(0.6\pi x)-0.8,\ 0.4\cosh(0.4\pi x+\pi/5)+0.7\big]^{T}, &
z_{40}(x)&=\big[0.6x^{2}-0.2,\ 0.3\sec(0.5\pi x)\big]^{T},
\end{aligned}
$$
and the actuator efficiency coefficients are:
$$
\begin{aligned}
\rho_1(t)&=\big|0.8-0.15\sin(\pi t)\big|, &
\rho_2(t)&=0.5+0.2\sin\Big(\frac{2\pi t}{5}\Big),\\
\rho_3(t)&=0.6+0.2\tanh\Big(2\sin\frac{\pi t}{2}\Big), &
\rho_4(t)&=0.7-0.2e^{-\frac{t}{4}}.
\end{aligned}
$$
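As a quick sanity check (not part of the paper), the profiles above, as reconstructed here, can be evaluated numerically to confirm that each $\rho_i(t)$ stays within the bounds required by Assumption 1.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
rho = np.array([
    np.abs(0.8 - 0.15 * np.sin(np.pi * t)),
    0.5 + 0.2 * np.sin(2 * np.pi * t / 5),
    0.6 + 0.2 * np.tanh(2 * np.sin(np.pi * t / 2)),
    0.7 - 0.2 * np.exp(-t / 4),
])
print(rho.min(axis=1), rho.max(axis=1))   # every profile should remain inside (0, 1)
```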
Figure 1 shows the variation in actuator efficiency over time for the four agents in different time periods. The red, green, blue, and magenta curves represent the actuator efficiencies $\rho_i(t)$ of the four agents (agents 1–4) as functions of time $t$. Each agent’s efficiency curve exhibits different fluctuations, reflecting the dynamic changes in actuator performance. By introducing actuator efficiency, we can simulate the stability challenges that the system may face in real-world applications due to actuator faults or performance degradation.
Figure 1. Variation of actuator performance over time in Example 1.
Figure 2 shows the control input signals $u_1(t)$ and $u_2(t)$ experiencing severe fluctuations during the attack, indicating that the system cannot receive normal control signals under a DoS attack, leading to abnormal system responses. However, it can also be observed that after the attack ends, the control signals gradually stabilize, suggesting that the control strategy proposed in this paper allows the system to recover to its normal state.
Figure 2. The trajectory of controller (3) over time in Example 1.
According to Theorem 1, when the agents are not under a DoS attack, the control gain of the controller is $k_1=6.4137$, and when the agents are subjected to a DoS attack, the control gain of the controller is $k_2=8.2516$. The response characteristics of the boundary controller at different time points are shown in Figure 2.
Remark 8.
Under normal conditions, agents continuously adjust their state by processing valid inputs. When a DoS attack occurs, information transmission is blocked. Agents can only rely on the last successfully transmitted value. This value is stored in the buffer. During the attack, agents maintain this value until the next successful transmission. This paper proposes a scheme that allows flexible adjustment of the buffer size, with the agent always operating based on the most recent valid data stored in the buffer.
Figure 3 illustrates the variation in error among the agents in this FOMAS without a controller. The color gradient (blue-green-yellow) represents a continuous increase in values. Over time, the error continues to increase. This indicates that without an effective control mechanism, the system cannot maintain consistency, and the error keeps accumulating over time. Figure 4 demonstrates the variation in the error between the agents in this FOMAS when controller (3) is applied. Although there are some fluctuations in the initial phase, the error gradually stabilizes over time and eventually achieves consistency, effectively controlling the error between the agents. This demonstrates the effectiveness of the control method proposed in this paper.
Figure 3. The variation of the error $\varepsilon(x,t)$ without a controller in Example 1.
Figure 4. The variation of the error $\varepsilon(x,t)$ under controller (3) in Example 1.
Example 2.
The verification of Theorem 2 is achieved by introducing a leader-following FOMAS consisting of four agents with the following characteristics: $\underline{\rho}=\mathrm{diag}([0.35,0.35,0.35,0.35])$, $L=1$, $n=2$, $i\in\{1,2,3,4\}$, $\tau=0.95$, $f(z(x,t))=\tanh(z(x,t))$, $\beta_1=5.6$, $\tilde{\beta}_1=1.3$, $\beta_2=12$, $\tilde{\beta}_2=9.7$, $\sigma=4.0$, $l_1=3.4$, $l_2=4.2$, $\Omega=\begin{bmatrix}4 & 0\\ 0 & 4\end{bmatrix}$, $A=\begin{bmatrix}1.3 & 0.5\\ 1 & 2.4\end{bmatrix}$, and the initial conditions are as follows:
$$
\begin{aligned}
z_{10}(x)&=\big[0.55\arctan(0.3\pi x)-1.4,\ 0.45e^{0.4x}-1.6\big]^{T}, &
z_{20}(x)&=\big[0.5e^{0.3x},\ 0.3\log(1+x^{2})-0.95\big]^{T},\\
z_{30}(x)&=\big[0.35(x^{2}-0.5x),\ |x+0.2|+0.65\big]^{T}, &
z_{40}(x)&=\big[0.75\tanh(0.5\pi x)-1.3,\ \log(1+|x|)-0.8\big]^{T},
\end{aligned}
$$
and the actuator efficiency coefficients are:
$$
\begin{aligned}
\rho_1(t)&=0.85-0.2\Big(1+\cos\frac{\pi t}{4}\Big), &
\rho_2(t)&=0.6+0.2\sin\Big(\frac{2\pi t}{3}\Big),\\
\rho_3(t)&=0.5+\frac{0.5}{1+e^{1.4\cos(\frac{t}{2.5})}}, &
\rho_4(t)&=0.8-0.3e^{-\frac{t}{2}}.
\end{aligned}
$$
Figure 5 shows the variation in actuator efficiency over time for four agents in different time periods. Each agent’s efficiency curve exhibits different fluctuations, reflecting the dynamic changes in actuator performance. By introducing actuator efficiency, we can simulate the stability challenges that the system may face in real-world applications due to actuator faults or performance degradation.
Figure 5. Variation of actuator performance over time in Example 2.
Figure 6 shows the control input signals $u_1(t)$ and $u_2(t)$ experiencing severe fluctuations during the attack, indicating that the system cannot receive normal control signals under a DoS attack, leading to abnormal system responses. However, it can also be observed that after the attack ends, the control signals gradually stabilize, suggesting that the control strategy proposed in this paper allows the system to recover to its normal state.
Figure 6. The trajectory of controller (19) over time in Example 2.
According to Theorem 2, when the agents are not under a DoS attack, the control gain of the controller is $k_1=8.3129$, and when the agents are subjected to a DoS attack, the control gain of the controller is $k_2=9.8362$. The response characteristics of the boundary controller at different time points are shown in Figure 6.
Figure 7 illustrates the variation in error among the agents in this FOMAS without a controller. The color gradient (blue-green-yellow) represents a continuous increase in values. Over time, the error continues to increase. This indicates that without an effective control mechanism, the system cannot maintain consistency, and the error keeps accumulating over time. Figure 8 demonstrates the variation in the error between the agents in this FOMAS when controller (19) is applied. Although there are some fluctuations in the initial phase, the error gradually stabilizes over time and eventually achieves consistency, effectively controlling the error between the agents. This demonstrates the effectiveness of the control method proposed in this paper.
Figure 7. The variation of the error $o(x,t)$ without a controller in Example 2.
Figure 8. The variation of the error $o(x,t)$ under controller (19) in Example 2.

6. Conclusions

This paper has investigated the consensus problem of FOMASs under DoS attacks and actuator faults and has proposed a new approach that applies boundary control strategies to address this issue. The controller relies solely on boundary measurements, thereby significantly reducing its dependence on internal sensors and actuators, as well as control complexity and communication overhead. To mitigate the effect of communication interruptions caused by DoS attacks, a buffering mechanism is incorporated: during attacks, it retains the last valid control input and applies it until communication is restored, which helps prevent loss of stability and consensus and limits performance degradation. In addition, actuator efficiency parameters are integrated into the controller structure to allow robust performance despite varying efficiency. Future work will extend this method to more complex settings, including time-delay networks, heterogeneous agent systems, and event-triggered communication, so as to broaden the applicability of the methodology to large-scale distributed intelligent systems.

Author Contributions

Conceptualization, Q.Q.; Methodology, C.Y.; Investigation, C.Y.; Writing–original draft, Q.Q.; Writing–review & editing, X.C., D.W., J.D., Y.Y. and C.Y.; Supervision, X.C., D.W., J.D. and Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant No. 62476117, in part by Key Project of Science and Technology Planning of Yunnan Provincial Science and Technology Department under Grant No. 202302AD080006, in part by Natural Science Foundation of Shandong Province under Grants No. ZR2022MF222, and in part by Shandong Provincial Key Research and Development Program under Grants No. 2025TSGCCZZB0870.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zeng, Q.; Nait-Abdesselam, F. Multi-Agent Reinforcement Learning-Based Extended Boid Modeling for Drone Swarms. In Proceedings of the ICC 2024—IEEE International Conference on Communications, Denver, CO, USA, 9–13 June 2024; pp. 1551–1556. [Google Scholar] [CrossRef]
  2. Yang, Y.; Xiao, Y.; Li, T. Attacks on Formation Control for Multiagent Systems. IEEE Trans. Cybern. 2022, 52, 12805–12817. [Google Scholar] [CrossRef]
  3. Antonio, G.P.; Maria-Dolores, C. Multi-Agent Deep Reinforcement Learning to Manage Connected Autonomous Vehicles at Tomorrow’s Intersections. IEEE Trans. Veh. Technol. 2022, 71, 7033–7043. [Google Scholar] [CrossRef]
  4. Xu, R.; Chen, C.J.; Tu, Z.; Yang, M.H. V2X-ViTv2: Improved Vision Transformers for Vehicle-to-Everything Cooperative Perception. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 650–662. [Google Scholar] [CrossRef] [PubMed]
  5. Wang, T.; Cao, J.; Hussain, A. Adaptive Traffic Signal Control for large-scale scenario with Cooperative Group-based Multi-agent reinforcement learning. Transp. Res. Part C Emerg. Technol. 2021, 125, 103046. [Google Scholar] [CrossRef]
  6. Zeynivand, A.; Javadpour, A.; Bolouki, S.; Sangaiah, A.; Ja’fari, F.; Pinto, P.; Zhang, W. Traffic flow control using multi-agent reinforcement learning. J. Netw. Comput. Appl. 2022, 207, 103497. [Google Scholar] [CrossRef]
  7. Liu, G.P. Coordination of Networked Nonlinear Multi-Agents Using a High-Order Fully Actuated Predictive Control Strategy. IEEE/CAA J. Autom. Sin. 2022, 9, 615–623. [Google Scholar] [CrossRef]
  8. Li, X.; Jiang, H.; Yu, Z.; Ren, Y.; Shi, T.; Chen, S. Leader-following scaled consensus of multi-agent systems based on nonlinear parabolic PDEs via dynamic event-triggered boundary control. Inf. Sci. 2025, 719, 122449. [Google Scholar] [CrossRef]
  9. Qiu, H.; Korovin, I.; Liu, H.; Gorbachev, S.; Gorbacheva, N.; Cao, J. Distributed adaptive neural network consensus control of fractional-order multi-agent systems with unknown control directions. Inf. Sci. 2024, 655, 119871. [Google Scholar] [CrossRef]
  10. Li, H.; Liu, S.; Meng, G.; Fan, Q. Dynamic Observer-Based H-Infinity Consensus Control of Fractional-Order Multi-Agent Systems. IEEE Trans. Autom. Sci. Eng. 2025, 22, 12720–12729. [Google Scholar] [CrossRef]
  11. Zamani, H.; Khandani, K.; Majd, V.J. Fixed-time sliding-mode distributed consensus and formation control of disturbed fractional-order multi-agent systems. ISA Trans. 2023, 138, 37–48. [Google Scholar] [CrossRef]
  12. Chen, Y.; Shao, S. Discrete-Time Fractional-Order Sliding Mode Attitude Control of Multi-Spacecraft Systems Based on the Fully Actuated System Approach. Fractal Fract. 2025, 9, 435. [Google Scholar] [CrossRef]
  13. Liu, Y.; Zhang, H.; Shi, Z.; Gao, Z. Neural-Network-Based Finite-Time Bipartite Containment Control for Fractional-Order Multi-Agent Systems. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 7418–7429. [Google Scholar] [CrossRef]
  14. Zhang, X.; Zheng, S.; Ahn, C.K.; Xie, Y. Adaptive Neural Consensus for Fractional-Order Multi-Agent Systems With Faults and Delays. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 7873–7886. [Google Scholar] [CrossRef] [PubMed]
  15. Liu, Y.; Zhang, H.; Wang, Y.; Liang, H. Adaptive Containment Control for Fractional-Order Nonlinear Multi-Agent Systems With Time-Varying Parameters. IEEE CAA J. Autom. Sin. 2022, 9, 1627–1638. [Google Scholar] [CrossRef]
  16. Zhong, Y.; Yuan, Y.; Yuan, H. Nash Equilibrium Seeking for Multi-Agent Systems Under DoS Attacks and Disturbances. IEEE Trans. Ind. Inform. 2024, 20, 5395–5405. [Google Scholar] [CrossRef]
  17. Xu, H.; Zhu, F.; Ling, X. Secure bipartite consensus of leader–follower multi-agent systems under denial-of-service attacks via observer-based dynamic event-triggered control. Neurocomputing 2025, 614, 128817. [Google Scholar] [CrossRef]
  18. Wen, H.; Li, Y.; Tong, S. Distributed adaptive resilient formation control for nonlinear multi-agent systems under DoS attacks. Sci. China Inf. Sci. 2024, 67, 209201. [Google Scholar] [CrossRef]
  19. Liu, J.; Dong, Y.; Gu, Z.; Xie, X.; Tian, E. Security consensus control for multi-agent systems under DoS attacks via reinforcement learning method. J. Frankl. Inst. 2024, 361, 164–176. [Google Scholar] [CrossRef]
  20. Chen, P.; Liu, S.; Chen, B.; Yu, L. Multi-Agent Reinforcement Learning for Decentralized Resilient Secondary Control of Energy Storage Systems Against DoS Attacks. IEEE Trans. Smart Grid 2022, 13, 1739–1750. [Google Scholar] [CrossRef]
  21. Zhao, N.; Zhang, H.; Shi, P. Observer-Based Sampled-Data Adaptive Tracking Control for Heterogeneous Nonlinear Multi-Agent Systems Under Denial-of-Service Attacks. IEEE Trans. Autom. Sci. Eng. 2025, 22, 4771–4779. [Google Scholar] [CrossRef]
  22. Zhao, Y.; Sun, H.; Yang, J.; Hou, L.; Yang, D. Event-Triggered Fully Distributed Bipartite Containment Control for Multi-Agent Systems Under DoS Attacks and External Disturbances. IEEE Trans. Circuits Syst. I Regul. Pap. 2025, 72, 6011–6024. [Google Scholar] [CrossRef]
  23. Wang, J.; Yan, Y.; Liu, Z.; Chen, C.P.; Zhang, C.; Chen, K. Finite-time consensus control for multi-agent systems with full-state constraints and actuator failures. Neural Netw. 2023, 157, 350–363. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, Y.; Li, H.; Sun, J.; He, W. Cooperative Adaptive Event-Triggered Control for Multiagent Systems With Actuator Failures. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1759–1768. [Google Scholar] [CrossRef]
  25. Deng, C.; Gao, W.; Che, W. Distributed adaptive fault-tolerant output regulation of heterogeneous multi-agent systems with coupling uncertainties and actuator faults. IEEE CAA J. Autom. Sin. 2020, 7, 1098–1106. [Google Scholar] [CrossRef]
  26. Li, J.; Yan, Z.; Shi, X.; Luo, X. Distributed Adaptive Formation Control for Fractional-Order Multi-Agent Systems with Actuator Failures and Switching Topologies. Fractal Fract. 2024, 8, 563. [Google Scholar] [CrossRef]
  27. Zhang, T.; Lin, M.; Xia, X.; Yi, Y. Adaptive cooperative dynamic surface control of non-strict feedback multi-agent systems with input dead-zones and actuator failures. Neurocomputing 2021, 442, 48–63. [Google Scholar] [CrossRef]
  28. Zhang, Z.; Dong, J. Containment control of interval type-2 fuzzy multi-agent systems with multiple intermittent packet dropouts and actuator failure. J. Frankl. Inst. 2020, 357, 6096–6120. [Google Scholar] [CrossRef]
  29. Lui, D.G.; Petrillo, A.; Santini, S. Adaptive fault-tolerant containment control for heterogeneous uncertain nonlinear multi-agent systems under actuator faults. J. Frankl. Inst. 2024, 361, 107317. [Google Scholar] [CrossRef]
  30. Su, Y.; Shan, Q.; Liang, H.; Li, T.; Zhang, H. Event-Based Adaptive Optimal Fault-Tolerant Consensus Control for Uncertain Nonlinear Multiagent Systems With Actuator Failures. IEEE Trans. Syst. Man Cybern. Syst. 2025, 55, 7273–7287. [Google Scholar] [CrossRef]
  31. Long, S.; Huang, W.; Wang, J.; Liu, J.; Gu, Y.; Wang, Z. A Fixed-Time Consensus Control With Prescribed Performance for Multi-Agent Systems Under Full-State Constraints. IEEE Trans. Autom. Sci. Eng. 2025, 22, 6398–6407. [Google Scholar] [CrossRef]
  32. Jiang, Y.; Liu, L.; Feng, G. Fully distributed adaptive control for output consensus of uncertain discrete-time linear multi-agent systems. Automatica 2024, 162, 111531. [Google Scholar] [CrossRef]
  33. Xu, N.; Chen, Z.; Niu, B.; Zhao, X. Event-Triggered Distributed Consensus Tracking for Nonlinear Multi-Agent Systems: A Minimal Approximation Approach. IEEE J. Emerg. Sel. Top. Circuits Syst. 2023, 13, 767–779. [Google Scholar] [CrossRef]
  34. Hu, W.; Cheng, Y.; Yang, C. Leader-following consensus of linear multi-agent systems via reset control: A time-varying systems approach. Automatica 2023, 149, 110824. [Google Scholar] [CrossRef]
  35. Song, X.; Sun, X.; Song, S.; Zhang, Y. Secure Consensus Control for PDE-Based Multiagent Systems Resist Hybrid Attacks. IEEE Syst. J. 2023, 17, 3047–3058. [Google Scholar] [CrossRef]
  36. Liu, Y.; Zuo, Z.; Song, J.; Li, W. Fixed-time consensus control of general linear multiagent systems. IEEE Trans. Autom. Control 2024, 69, 5516–5523. [Google Scholar] [CrossRef]
  37. Peng, X.J.; He, Y.; Li, H.; Tian, S. Consensus Control and Optimization of Time-Delayed Multiagent Systems: Analysis on Different Order-Reduction Methods. IEEE Trans. Syst. Man Cybern. Syst. 2025, 55, 780–791. [Google Scholar] [CrossRef]
  38. Hao, Y.; Fang, Z.; Cao, J.; Liu, H. Consensus Control of Nonlinear Fractional-Order Multiagent Systems With Input Saturation: A T–S Fuzzy Method. IEEE Trans. Fuzzy Syst. 2024, 32, 6754–6766. [Google Scholar] [CrossRef]
  39. Seuret, A.; Gouaisbaut, F. Wirtinger-based integral inequality: Application to time-delay systems. Automatica 2013, 49, 2860–2866. [Google Scholar] [CrossRef]
  40. Qin, J.; Gao, H.; Zheng, W.X. Exponential Synchronization of Complex Networks of Linear Systems and Nonlinear Oscillators: A Unified Analysis. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 510–521. [Google Scholar] [CrossRef]
  41. Lv, Y.; Hu, C.; Yu, J.; Jiang, H.; Huang, T. Edge-Based Fractional-Order Adaptive Strategies for Synchronization of Fractional-Order Coupled Networks With Reaction–Diffusion Terms. IEEE Trans. Cybern. 2020, 50, 1582–1594. [Google Scholar] [CrossRef]
  42. Li, Y.; Chen, Y.; Podlubny, I. Mittag–Leffler stability of fractional order nonlinear dynamic systems. Automatica 2009, 45, 1965–1969. [Google Scholar] [CrossRef]
  43. Yang, C.; Huang, T.; Yi, K.; Zhang, A.; Chen, X.; Li, Z.; Qiu, J.; Alsaadi, F.E. Synchronization for nonlinear complex spatio-temporal networks with multiple time-invariant delays and multiple time-varying delays. Neural Process. Lett. 2019, 50, 1051–1064. [Google Scholar] [CrossRef]
  44. Liu, C.; Wang, W.; Jiang, B.; Patton, R.J. Fault-Tolerant Consensus of Multi-Agent Systems Subject to Multiple Faults and Random Attacks. IEEE Trans. Circuits Syst. I Regul. Pap. 2025, 72, 4935–4945. [Google Scholar] [CrossRef]
  45. Chen, J.; Shen, J.; Chen, W.; Li, J.; Zhang, S. Application of Robust Fuzzy Cooperative Strategy in Global Consensus of Stochastic Multi-Agent Systems. IEEE Trans. Autom. Sci. Eng. 2025, 22, 12058–12070. [Google Scholar] [CrossRef]
  46. De Persis, C.; Tesi, P. Input-to-State Stabilizing Control Under Denial-of-Service. IEEE Trans. Autom. Control 2015, 60, 2930–2944. [Google Scholar] [CrossRef]
  47. Yan, L.; Liu, Z.; Philip Chen, C.; Zhang, Y.; Wu, Z. Optimized adaptive consensus control for multi-agent systems with prescribed performance. Inf. Sci. 2022, 613, 649–666. [Google Scholar] [CrossRef]
  48. Sun, Y.; Shi, P.; Lim, C.C. Event-triggered sliding mode scaled consensus control for multi-agent systems. J. Frankl. Inst. 2022, 359, 981–998. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
