Article

Leader–Following Fault-Tolerant Consensus Control for Multi-Agent Systems Based on Observers

1
Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai 200092, China
2
School of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
*
Author to whom correspondence should be addressed.
Sensors 2026, 26(10), 3153; https://doi.org/10.3390/s26103153
Submission received: 8 April 2026 / Revised: 13 May 2026 / Accepted: 15 May 2026 / Published: 16 May 2026

Abstract

In this paper, a new fault-tolerant consensus control mechanism, called the distributed information estimation and centralized control scheme, is developed for leader–follower multi-agent systems (MASs). To begin with, an unknown input observer (UIO) is designed for each follower agent to obtain an asymptotically convergent state estimate. Then, a fault reconstruction (FR) method is proposed by constructing an interval observer from the sensor measurement output. Most importantly, using the leader's state estimate provided by a local observer, a distributed observer (DO) is designed so that each follower can obtain an estimate of the leader's state. Subsequently, a centralized controller is designed for each follower agent using its own state estimate and FR together with the leader's state estimate offered by the DO. In this way, a DO-based distributed fault-tolerant control protocol is developed, in which the distributed feature is mainly reflected in the DO construction, so that the controller can be formulated in a centralized way. In addition, under the DO-based distributed fault-tolerant control protocol, MAS consensus can be reached. Finally, two simulation examples are given to show the effectiveness of the proposed methods.

1. Introduction

A distributed observer (DO) is typically composed of multiple local observers, providing a method for each agent to estimate the system's state solely based on its own measurements and information received from its neighbors. The earliest systematic work on DOs was proposed in 2007 [1]. Since then, DOs have been used for real-time monitoring, collection, and analysis of the state information of the various nodes in a system, in order to achieve global or local state estimation. Distributed cooperative control plays an extremely important role in MASs, such as in energy management [2], unmanned aerial vehicle (UAV) formation [3,4], satellite formation [5], intelligent transportation systems [6], and so on. Leader–following consensus problems have been investigated for years, and many significant results can be found in the literature [7,8,9,10,11,12,13]. For example, Ref. [8] addresses the leader–following reliable consensus control problem and proposes a non-fragile distributed consensus control protocol which can achieve MAS consensus even in the presence of stochastic parameter changes, actuator faults, and controller gain perturbations. Wang et al. [9] tackle the leader–follower consensus problem in MASs by formulating it as a particle swarm optimization problem. Zhang et al. [14] investigate disturbed event-triggered fixed-time bipartite consensus for nonlinear MASs with and without a leader over signed networks. In other respects, leader–following consensus control involves event-triggered control [15], switching topologies [16], Denial-of-Service (DoS) attacks [17], unknown inputs [18], containment control [19], and so on.
Under the leader–follower structure, the state information of the leader acts as one of the most crucial pieces of information characterizing the status of an MAS. For leader–following consensus problems, letting the followers obtain the leader's state information directly or indirectly via communication networks brings many benefits, since the problems can then be solved much more conveniently. Ref. [20] was the first to explicitly introduce DOs into MASs to solve the problem of leader state estimation. Ref. [21] introduces an auxiliary system, transforms the original problem into an augmented system that includes the dynamics of the agents and the leader, and adopts a discounted performance criterion along with the robust algebraic Riccati equation (RARE) to rigorously solve the robust output consensus problem. Wang et al. [22] propose a learning-based fully distributed observer that enables each agent to simultaneously learn the unknown dynamics and states of a nonlinear leader, and then design an adaptive distributed control law to achieve leader–following consensus in heterogeneous Euler–Lagrange MASs. Recently, Bi et al. [23] proposed a novel adaptive distributed observer that estimates both the leader's system matrix and state under unbounded distributed communication delays, and designed a distributed controller to achieve leader–following consensus.
It is widely recognized that actuators are susceptible to failures, as they are required to operate continuously and under varying conditions over extended periods. Consequently, the study of actuator fault-tolerant control has become critically important for MASs, as it enhances system reliability and ensures stable performance even in the presence of component malfunctions. A distributed adaptive leader–follower fault-tolerant controller is proposed for nonlinear MASs with actuator faults and delays [24]. Distributed bipartite adaptive event-triggered fault-tolerant leader–following consensus is investigated for MASs with unknown actuator faults [25]. In addition, the UIO has been introduced to estimate the states of systems and even the unknown inputs (UIs) themselves. Notably, in the context of actuator fault-tolerant control for MASs, actuator faults can be effectively modeled as a class of UIs. This equivalence allows the UIO to serve as a powerful tool not only for state estimation but also for real-time fault reconstruction, thereby providing a solid foundation for active fault-tolerant control strategies. Several results on handling UIs have been obtained [26,27,28,29]. For example, in [27], a distributed UIO (DUIO) is designed using a distributed interval observer for multi-sensor systems to estimate the states of the target system. UIO techniques are employed to address the challenges posed by unknown disturbances in MASs [28]. Ref. [29] combines a reduced-order observer with an interval observer to present a framework for fault reconstruction (FR) and fault-tolerant control that addresses actuator and sensor faults concurrently. A clear and intuitive comparison with existing fault-tolerant control and observer designs is shown in Table 1.
In this article, for the MAS fault-tolerant consensus problem, a DO-based control protocol construction scheme is developed such that the distributed feature is mainly reflected in the DO construction. The main contributions of this article are as follows:
(1)
Different from the work in [22,30,31], the followers considered in the present paper are subject to unknown disturbances and their states are unmeasurable. Therefore, a local UIO is designed for each follower to estimate the system state asymptotically. Moreover, through an interval observer designed from the sensor measurement output, an algebraic relation linking the fault signal to the state estimation error is established, and an FR scheme is then developed which provides an asymptotically convergent estimate of the actual fault signal. This ensures that the reconstructed fault accurately approaches the true fault over time, thereby providing a highly precise baseline for generating a system compensation control scheme.
(2)
By using the state estimation of the leader which is offered by a local Luenberger-like observer and the information of the neighbors, a DO is constructed at each follower agent. Unlike fully distributed control protocols [32,33] that suffer from heavy communication burdens and complex gain-tuning coupled with network topologies, this structural separation allows the controller to be formulated in a simple centralized way, significantly reducing inter-agent communication overhead. The proposed DO is able to estimate the leader’s state asymptotically. Therefore, through the DO, each follower can access the leader’s state information. Consequently, a DO-based controller for each follower can be developed to fulfill the MAS consensus in a centralized way.
(3)
A DO-based fault-tolerant control protocol mechanism is developed through the combination of a simple feedback controller based on the state estimate and the FR. The proposed DO-based fault-tolerant control protocol can be viewed as a distributed control protocol, while the distributed feature is mainly reflected by the DO rather than the controller. In fact, because of the DO together with the local UIO, the controller becomes a simple state and FR feedback controller. More importantly, distinct from prior methods that only prove deterministic convergence, our framework mathematically establishes rigorous H∞ performance guarantees via linear matrix inequalities (LMIs), as shown in Theorem 1. In this way, asymptotic MAS consensus is reached.
The rest of this article is organized as follows. In Section 2, some preliminaries and the system description are presented. In Section 3, a UIO is designed for each follower to obtain the state estimation and the FR. In Section 4, a DO is designed such that each follower is able to access the leader's state information, and then a DO-based control protocol scheme is proposed. In Section 5, two simulation examples are presented. Section 6 gives the conclusions.
Table 1. Standardized methods summary of fault-tolerant control and observer design.
Paper | System | Fault Types | Observer | Control Architecture | Stability Guarantees
[34] | Nonlinear | Actuator | FO | Partially centralized | Finite-time
[35] | Nonlinear | — | — | Fully distributed | Finite-time
[36] | Linear | — | ESO | Partially centralized | Exponential
[29] | Linear | Actuator, Sensor | Luenberger/Interval | Partially centralized | Asymptotic
Proposed | Linear | Actuator | UIO, Interval, DO | Partially centralized | Asymptotic

2. Preliminaries and System Description

In this section, the basic graph theory is reviewed, some notations are introduced, and the system description and lemmas are presented.

2.1. Graph Theory

The communication between agents can be represented by a topology graph G = (V, E, A), where V = \{1, \ldots, N\} represents the set of nodes and E \subseteq V \times V is the edge set. A = [a_{ij}] \in R^{N \times N} denotes the adjacency matrix of G. (i, j) \in E means that agent i can receive information from agent j, and agent j is then a neighbor node of agent i. If (i, j) \in E, we set a_{ij} > 0; otherwise, a_{ij} = 0. The Laplacian matrix L = [l_{ij}] \in R^{N \times N} is defined by l_{ij} = -a_{ij} for i \ne j and l_{ii} = \sum_{j=1}^{N} a_{ij} (i \in \mathcal{N} = \{1, \ldots, N\}). Obviously, if we always assume that a_{ii} = 0 and define B = \operatorname{diag}_{i \in \mathcal{N}} \{ l_{ii} \}, then we have L = B - A. Agent 0 is the leader agent. For follower agent i, if it can obtain the state information of the leader, then a_{i0} = 1; otherwise, a_{i0} = 0. Define the matrix H = L + D, where D = \operatorname{diag}_{i \in \mathcal{N}} \{ a_{i0} \}.
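The constructions above can be sketched numerically. The following is a minimal example with a hypothetical three-follower topology (not one from this paper) that builds L = B − A and H = L + D:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian: l_ii = sum_j a_ij on the diagonal, l_ij = -a_ij off it."""
    return np.diag(adj.sum(axis=1)) - adj

# hypothetical undirected 3-follower topology (complete graph, unit weights)
A_adj = np.array([[0., 1., 1.],
                  [1., 0., 1.],
                  [1., 1., 0.]])
L = laplacian(A_adj)

# only follower 1 can access the leader: a_10 = 1, a_20 = a_30 = 0
D = np.diag([1., 0., 0.])
H = L + D

print(L.sum(axis=1))          # Laplacian row sums are zero
print(np.linalg.eigvalsh(H))  # H is positive definite for a connected, pinned graph
```

For a connected follower graph with at least one follower pinned to the leader, H = L + D is positive definite, which is what the consensus analysis later exploits.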

2.2. Notations

The Kronecker product is represented by the sign ⊗. \|\cdot\| stands for the Euclidean norm of a vector. I_N and 1_N represent the N \times N identity matrix and the N \times 1 vector with all entries equal to one, respectively. For a matrix S = [s_{ij}] \in R^{n \times m}, let S^+ = [s_{ij}^+] with s_{ij}^+ = \max\{0, s_{ij}\} and S^- = [s_{ij}^-] with s_{ij}^- = \max\{-s_{ij}, 0\}; then obviously, S = S^+ - S^- and |S| = S^+ + S^-. S^{\dagger} denotes a generalized inverse of S satisfying S S^{\dagger} S = S. For another matrix R = [r_{ij}] \in R^{n \times m}, S \le R if and only if s_{ij} \le r_{ij} (i = 1, \ldots, n; j = 1, \ldots, m).
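The positive/negative part decomposition above can be checked in a few lines; a small sketch with an arbitrary example matrix:

```python
import numpy as np

S = np.array([[ 1.5, -2.0],
              [-0.5,  3.0]])
S_plus  = np.maximum(S, 0.0)   # s_ij^+ = max{0, s_ij}
S_minus = np.maximum(-S, 0.0)  # s_ij^- = max{-s_ij, 0}

assert np.allclose(S, S_plus - S_minus)          # S = S^+ - S^-
assert np.allclose(np.abs(S), S_plus + S_minus)  # |S| = S^+ + S^-
```

This decomposition is what the interval observer later uses to bound terms such as (CB) d_i from below and above.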

2.3. System Description

Consider an MAS consisting of N + 1 agents: one leader and N followers. Suppose the follower system is

\dot{x}_i = A x_i + B (u_{f,i} + \omega_i), \quad y_i = C x_i, \quad i = 1, \ldots, N    (1)

where A, B and C are known constant matrices with appropriate dimensions; x_i \in R^n, y_i \in R^p and \omega_i \in R^m are respectively the system state, sensor measurement output and external disturbance of the ith agent. u_{f,i} \in R^m represents the actuation signal, formulated as u_{f,i} = \eta_i u_i + f_{a,i}, where \eta_i \in R and f_{a,i} \in R^m denote the efficiency factor and bias signal of actuator faults, respectively. The actuator operates nominally provided that \eta_i = 1 and f_{a,i} = 0; any other values signify an actuation failure. Incorporating these fault parameters, the plant (1) is described by

\dot{x}_i = A x_i + B (u_i + d_i), \quad y_i = C x_i, \quad i = 1, \ldots, N    (2)

where u_i stands for the actuator control input, and d_i = (\eta_i - 1) u_i + f_{a,i} + \omega_i can be viewed as a multiple unknown input (MUI) within the dynamics (2). The leader agent is described by

\dot{x}_0 = A x_0 + B u_0, \quad y_0 = C x_0    (3)

where x_0 \in R^n, u_0 \in R^m and y_0 \in R^p are respectively the state, control input, and sensor measurement output of the leader.
The leader–follower consensus condition is defined as follows.
Definition 1.
Consider MASs (2) and (3). If

\lim_{t \to \infty} \| x_i(t) - x_0(t) \| = 0, \quad i = 1, 2, \ldots, N,

then MASs (2) and (3) are said to achieve consensus.
Assumption A1
([37]). System (2) is a minimum phase system; that is, the rank condition

\operatorname{rank} \begin{bmatrix} s I_n - A & B \\ C & 0 \end{bmatrix} = n + m

is satisfied for all complex s with \operatorname{Re}(s) \ge 0.
Assumption A2
([37]). The observer matching condition rank(CB) = rank(B) = m holds.
Assumption A3.
The pair ( A , B ) is stabilizable.
Assumption A4.
The directed graph G associated with the followers is strongly connected.
Assumption A5
([38]). The external disturbance \omega_i(t) is bounded with \underline{\omega}_i \le \omega_i(t) \le \bar{\omega}_i, where \underline{\omega}_i and \bar{\omega}_i are known constant vectors. The initial state x_i(0) is unknown but bounded with \underline{x}_{i,0} \le x_i(0) \le \bar{x}_{i,0}, where \underline{x}_{i,0} and \bar{x}_{i,0} are known constant vectors. The actuator control input u_i, efficiency parameter \eta_i, and bias signal f_{a,i} are bounded with \underline{u}_i \le u_i(t) \le \bar{u}_i, \underline{\eta}_i \le \eta_i(t) \le \bar{\eta}_i, and \underline{f}_{a,i} \le f_{a,i}(t) \le \bar{f}_{a,i}, respectively, where \underline{u}_i, \bar{u}_i, \underline{f}_{a,i} and \bar{f}_{a,i} are known constant vectors, and \underline{\eta}_i and \bar{\eta}_i are known constant scalars.
Under Assumption 5, we have \underline{d}_i \le d_i(t) \le \bar{d}_i, where \underline{d}_i = \underline{\eta}_i \underline{u}_i - \bar{u}_i + \underline{f}_{a,i} + \underline{\omega}_i and \bar{d}_i = \bar{\eta}_i \bar{u}_i - \underline{u}_i + \bar{f}_{a,i} + \bar{\omega}_i. In practice, physical parameter variations and unmodeled dynamics are inevitable. This paper lumps these internal uncertainties and external disturbances into the bounded disturbance term \omega_i(t). Under Assumption 5, the proposed DO-based protocol actively compensates for these combined uncertainties.
Remark 1.
Assumptions 1 and 2 are the so-called minimum phase condition and observer matching condition, respectively. The former guarantees the convergence property of the state observer error system, while the latter means that the faults can be measured through the system outputs.
Remark 2.
The proposed strategy provides a unified solution for the diverse fault conditions outlined in Table 2 by constructing an augmented MUI d_i = (\eta_i - 1) u_i + f_{a,i} + \omega_i. The primary methodology relies on synthesizing a multiple-unknown-input reconstruction (MUIR) to asymptotically estimate the actual d_i, which is subsequently integrated into the compensation controller to guarantee the asymptotic stability of the closed-loop system. It is worth emphasizing that, in the case of \eta_i \ne 1, the constructed MUI explicitly contains the control input u_i. Due to the coupling between d_i and u_i, successfully executing the aforementioned two-step compensation strategy poses a significant challenge.

3. Local Observer Design with FR

In this section, local observers are designed to obtain the state estimations of both the leader and followers. In addition, an FR design method is proposed.

3.1. Local Observer for Leader

A Luenberger observer is designed for (3),

\dot{\hat{x}}_0 = A \hat{x}_0 + B u_0 + \bar{L} (y_0 - C \hat{x}_0)    (4)

to obtain the state estimate of the leader. The error dynamics between (3) and (4) are

\dot{\tilde{x}}_0 = (A - \bar{L} C) \tilde{x}_0    (5)

where \tilde{x}_0 = x_0 - \hat{x}_0. In addition, we design the observer-based state feedback controller for the leader as

u_0 = K_0 \hat{x}_0    (6)

Consequently, the closed-loop dynamics under the observer-based state feedback can be expressed as

\dot{x}_0 = (A + B K_0) x_0 - B K_0 \tilde{x}_0    (7)
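The Luenberger design above can be illustrated with a small numerical sketch (the system matrices below are hypothetical, not those of the paper): the observer gain is placed by pole assignment on the dual pair, and the error dynamics (5) then decay to zero.

```python
import numpy as np
from scipy.signal import place_poles

# hypothetical observable second-order leader model
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
C = np.array([[1.0, 0.0]])

# observer gain via duality: place eigenvalues of A - L C at -3 and -4
L_gain = place_poles(A.T, C.T, [-3.0, -4.0]).gain_matrix.T

# Euler simulation of the error dynamics (5): xtilde_dot = (A - L C) xtilde
Acl = A - L_gain @ C
xt = np.array([1.0, -1.0])   # unknown initial estimation error
dt = 1e-3
for _ in range(int(5.0 / dt)):
    xt = xt + dt * (Acl @ xt)

print(np.linalg.norm(xt))    # estimation error has essentially vanished
```

The same duality trick (placing poles on (A^T, C^T)) is the standard way to obtain any of the observer gains discussed in this paper when an LMI solver is not at hand.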

3.2. UIO and FR Designs for the Followers

By the observer matching condition in Assumption 2, (CB)^{\dagger} = [(CB)^T CB]^{-1} (CB)^T exists. Then, we have (I_n + SC)B = 0, where S = -B(CB)^{\dagger} + \Gamma(I_p - CB(CB)^{\dagger}) and \Gamma \in R^{n \times p} is an arbitrary matrix. Therefore, multiplying the state equation of (2) by (I_n + SC) from the left gives \dot{x}_i = (I_n + SC)A x_i - S \dot{y}_i. Furthermore, by the state transformation \zeta_i = x_i + S y_i, system (2) is transformed into

\dot{\zeta}_i = \bar{A} \zeta_i - \bar{A} S y_i, \quad \bar{y}_i = C \zeta_i    (8)

where \bar{A} = (I_n + SC)A and \bar{y}_i = (I_p + CS) y_i.
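The annihilation identity (I_n + SC)B = 0 that underpins this transformation is easy to verify numerically; a sketch with a hypothetical system satisfying rank(CB) = rank(B) = m:

```python
import numpy as np

# hypothetical plant dimensions n = 3, p = 2, m = 1 with CB full column rank
B = np.array([[0.0], [1.0], [0.0]])
C = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

CB = C @ B
CB_dag = np.linalg.inv(CB.T @ CB) @ CB.T   # left inverse [(CB)^T CB]^{-1} (CB)^T

rng = np.random.default_rng(0)
Gamma = rng.standard_normal((3, 2))        # arbitrary free matrix
S = -B @ CB_dag + Gamma @ (np.eye(2) - CB @ CB_dag)

# the unknown input channel is annihilated: (I_n + S C) B = 0
print((np.eye(3) + S @ C) @ B)             # prints a zero vector
```

Because the identity holds for every choice of Γ, the free matrix can be used as an extra degree of freedom in the observer design without re-coupling the unknown input.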
Lemma 1.
Under Assumption 1, the pair (\bar{A}, C) is detectable, with \bar{A} = (I_n - B(CB)^{\dagger} C) A.
Proof: Refer to Appendix A.
Now, design a UIO for (8) as follows:

\dot{\hat{\zeta}}_i = \bar{A} \hat{\zeta}_i - \bar{A} S y_i + L (\bar{y}_i - C \hat{\zeta}_i), \quad \hat{x}_i = \hat{\zeta}_i - S y_i    (9)

Then, the UIO error dynamics are obtained by subtracting (9) from (8):

\dot{\tilde{\zeta}}_i = (\bar{A} - L C) \tilde{\zeta}_i    (10)

where \tilde{\zeta}_i = \zeta_i - \hat{\zeta}_i. Therefore, (9) is a UIO of (8) (or (2)) provided that the observer gain matrix L is designed such that all eigenvalues of \bar{A} - LC have negative real parts; the existence of such an L is guaranteed by Lemma 1. That is, \hat{\zeta}_i and \hat{x}_i are asymptotically convergent estimates of \zeta_i and x_i, respectively. In addition, denoting \tilde{x}_i = x_i - \hat{x}_i, we have \tilde{x}_i = \tilde{\zeta}_i. Moreover, the overall system of (10) is

\dot{\tilde{\zeta}} = [I_N \otimes (\bar{A} - L C)] \tilde{\zeta}    (11)

where \tilde{\zeta} = [\tilde{\zeta}_1^T, \ldots, \tilde{\zeta}_N^T]^T.
Having obtained the state estimate of the ith follower agent, we next propose an FR via an interval observer built on the sensor measurement output. To begin with, based on the sensor measurement output of (2), we have
\dot{y}_i = C A x_i + C B (u_i + d_i)    (12)

We design an interval observer for (12) as follows:

\dot{\bar{y}}_i = C A \hat{x}_i + C B u_i + (CB)^+ \bar{d}_i - (CB)^- \underline{d}_i + Q (\bar{y}_i - y_i)
\dot{\underline{y}}_i = C A \hat{x}_i + C B u_i + (CB)^+ \underline{d}_i - (CB)^- \bar{d}_i + Q (\underline{y}_i - y_i)    (13)

where \hat{x}_i is the state estimate provided by the UIO (9), and Q \in R^{p \times p} is an arbitrary Metzler and Hurwitz matrix. The error equations of the interval observer are derived from (12) and (13):

\dot{\bar{\varepsilon}}_i = Q \bar{\varepsilon}_i - C A \tilde{x}_i + (CB)^+ \bar{d}_i - (CB)^- \underline{d}_i - C B d_i
\dot{\underline{\varepsilon}}_i = Q \underline{\varepsilon}_i + C A \tilde{x}_i + C B d_i - (CB)^+ \underline{d}_i + (CB)^- \bar{d}_i    (14)

where \bar{\varepsilon}_i = \bar{y}_i - y_i and \underline{\varepsilon}_i = y_i - \underline{y}_i.
Lemma 2
([39]). Suppose that \underline{x}_i(t) \le x_i(t) \le \bar{x}_i(t) with x_i(t), \bar{x}_i(t), \underline{x}_i(t) \in R^n, and let W \in R^{m \times n} be a constant matrix; then

W^+ \underline{x}_i(t) - W^- \bar{x}_i(t) \le W x_i(t) \le W^+ \bar{x}_i(t) - W^- \underline{x}_i(t)

Lemma 3
([39]). For a dynamic system \dot{x} = \Theta x + \rho(t), if \Theta is Metzler, x(0) \ge 0 and \rho(t) \ge 0, then x(t) \ge 0 for all t \ge 0.

Lemma 4.
Under Assumptions 1 and 5, system (13) is an interval observer of system (12) such that \underline{y}_i(t) \le y_i(t) \le \bar{y}_i(t) holds for all t \ge 0, if the initial values are set as \bar{y}_i(0) = C^+ \bar{x}_{i,0} - C^- \underline{x}_{i,0} and \underline{y}_i(0) = C^+ \underline{x}_{i,0} - C^- \bar{x}_{i,0}, and Q is an arbitrarily chosen Hurwitz and Metzler matrix.
Proof: Refer to Appendix B.
Since \underline{y}_i(t) \le y_i(t) \le \bar{y}_i(t), there exists a time-varying vector \alpha_i(t) = [\alpha_{i1}(t), \ldots, \alpha_{ip}(t)]^T \in R^p, satisfying 0 \le \alpha_{ij}(t) \le 1, such that

y_i = \operatorname{diag}(\Delta y_i) \alpha_i + \underline{y}_i    (15)

where \Delta y_i = \bar{y}_i - \underline{y}_i. In addition, (15) gives

\alpha_i = [\operatorname{diag}^{-1}(\Delta y_i + \gamma_i) - \operatorname{diag}(\gamma_i)] (y_i - \underline{y}_i)    (16)

where \gamma_i \in R^p with \gamma_{i,j} = 1 if \Delta y_{i,j} = 0 and \gamma_{i,j} = 0 otherwise (j = 1, \ldots, p). Here \gamma_{i,j} and \Delta y_{i,j} are the jth elements of the vectors \gamma_i and \Delta y_i, respectively. Differentiating (15), we obtain

\dot{y}_i = \operatorname{diag}(\Delta \dot{y}_i) \alpha_i + \operatorname{diag}(\Delta y_i) \dot{\alpha}_i + \dot{\underline{y}}_i    (17)
It follows from (13) that

\Delta \dot{y}_i = \phi_1 = Q \Delta y_i + |CB| \Delta d_i    (18)

where \Delta d_i = \bar{d}_i - \underline{d}_i. In addition, the second equation of (13) is

\dot{\underline{y}}_i = C A \hat{x}_i + C B u_i + \phi_2    (19)

where \phi_2 = (CB)^+ \underline{d}_i - (CB)^- \bar{d}_i + Q (\underline{y}_i - y_i). Now, substituting (18) and (19) into (17) yields

\dot{y}_i = \operatorname{diag}(\phi_1) \alpha_i + \operatorname{diag}(\Delta y_i) \dot{\alpha}_i + C A \hat{x}_i + C B u_i + \phi_2    (20)

Comparing (12) with (20), we obtain

C B d_i = \operatorname{diag}(\phi_1) \alpha_i + \operatorname{diag}(\Delta y_i) \dot{\alpha}_i - C A \tilde{x}_i + \phi_2    (21)

Using the rank condition rank(CB) = m in Assumption 2, (21) gives

d_i = (CB)^{\dagger} [\operatorname{diag}(\phi_1) \alpha_i + \operatorname{diag}(\Delta y_i) \dot{\alpha}_i + \phi_2 - C A \tilde{x}_i]    (22)
By setting \tilde{x}_i = 0 in (22), an FR is provided as

\hat{d}_i = (CB)^{\dagger} [\operatorname{diag}(\phi_1) \alpha_i + \operatorname{diag}(\Delta y_i) \hat{\dot{\alpha}}_i + \phi_2]    (23)

where \hat{\dot{\alpha}}_i represents the exact finite-time estimate of \dot{\alpha}_i provided by the differentiator [40]:

\dot{\xi}_{1,ij} = \kappa_{1,ij} = -\rho_{1,ij} |\xi_{1,ij} - \alpha_{ij}|^{1/2} \operatorname{sign}(\xi_{1,ij} - \alpha_{ij}) + \xi_{2,ij}
\dot{\xi}_{2,ij} = -\rho_{2,ij} \operatorname{sign}(\xi_{2,ij} - \kappa_{1,ij}), \quad (j = 1, \ldots, p)    (24)

where \rho_{1,ij} > 0 and \rho_{2,ij} > 0 are positive scalar gains. \xi_{2,ij} is an exact estimate of \dot{\alpha}_{ij} after a finite time; therefore, \hat{\dot{\alpha}}_i = [\xi_{2,i1}, \ldots, \xi_{2,ip}]^T recovers \dot{\alpha}_i identically in finite time. That is, \dot{\alpha}_i(t) \equiv \hat{\dot{\alpha}}_i(t) after the convergence time; then it follows from (22) and (23) that

\tilde{d}_i = -(CB)^{\dagger} C A \tilde{x}_i    (25)

The FR offered by (23) is able to estimate the actual unknown input d_i asymptotically. In fact, \lim_{t \to \infty} \tilde{d}_i(t) = -(CB)^{\dagger} C A \lim_{t \to \infty} \tilde{x}_i(t) = 0, where \tilde{d}_i = d_i - \hat{d}_i.
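A minimal single-channel sketch of the differentiator (24), with illustrative gains not taken from the paper: it recovers the derivative of a test signal α(t) = sin t via Euler integration.

```python
import numpy as np

rho1, rho2 = 5.0, 2.0     # illustrative gains, larger than the Lipschitz bound of alpha_dot
dt, T = 1e-4, 5.0
xi1, xi2 = 0.0, 0.0
t = 0.0
for _ in range(int(T / dt)):
    alpha = np.sin(t)     # measured signal alpha_ij(t)
    kappa1 = -rho1 * np.sqrt(abs(xi1 - alpha)) * np.sign(xi1 - alpha) + xi2
    xi1 += dt * kappa1
    xi2 += dt * (-rho2 * np.sign(xi2 - kappa1))
    t += dt

# after a finite transient, xi1 tracks alpha and xi2 tracks alpha_dot = cos(t)
print(abs(xi1 - np.sin(t)), abs(xi2 - np.cos(t)))
```

In the FR scheme, one such differentiator runs per output channel j of each follower, and the stack of the ξ₂ states forms the derivative estimate used in (23).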
Remark 3.
The proposed FR (23) shows several advantages. First, it provides an asymptotically convergent estimate of the actual fault. Secondly, it does not rely on derivative information of the fault. Thirdly, the FR is decoupled from the control input u_i of the ith follower; thereby, an FR-based compensating controller can be effectively derived.

4. DO Design and DO-Based Fault-Tolerant Control Protocol

4.1. DO Design

In this section, for each follower agent, we design a DO such that the follower can obtain an estimate of the leader's state. Then, a DO-based control protocol scheme is developed.
For the ith follower agent, consider the DO

\dot{x}_{0,i} = A x_{0,i} + M \Big[ \sum_{j=1}^{N} a_{ij} (x_{0,i} - x_{0,j}) + a_{i0} (x_{0,i} - \hat{x}_0) \Big]    (26)

where \hat{x}_0 is provided by the local observer (4). Denote \bar{x}_{0,i} = x_0 - x_{0,i}; then the dynamics of \bar{x}_{0,i} are obtained by subtracting (26) from (3):

\dot{\bar{x}}_{0,i} = A \bar{x}_{0,i} - M a_{i0} \tilde{x}_0 + B u_0 + M \Big[ \sum_{j=1}^{N} a_{ij} (\bar{x}_{0,i} - \bar{x}_{0,j}) + a_{i0} \bar{x}_{0,i} \Big]    (27)

The overall system of (27) is

\dot{\bar{x}}_0 = (I_N \otimes A + H \otimes M) \bar{x}_0 + (1_N \otimes B) u_0 - (D 1_N) \otimes M \tilde{x}_0    (28)

where \bar{x}_0 = [\bar{x}_{0,1}^T, \ldots, \bar{x}_{0,N}^T]^T.
Remark 4.
Note that the leader's control input u_0 is local information of the leader, and only those followers which are connected to the leader directly can access the leader's information. Therefore, in view of the distributed structure, the DO (26) cannot involve u_0.
Remark 5.
Generally, since not every follower can directly obtain the leader's state information through the communication topology, constructing distributed control protocols is the major method for MAS consensus problems [41,42,43,44]. In the present paper, a DO-based control protocol is developed, and the distributed feature is mainly reflected by the DO. Therefore, the DO (26) plays an important role in the designs.
Remark 6.
It is worth noting the fundamental distinction between fully distributed and partially centralized (DO-based) control architectures. Fully distributed protocols directly embed the communication topology within the feedback control loop. Under severe local disturbances, this continuous state exchange can exacerbate fault propagation and communication burdens. In contrast, the proposed partially centralized scheme decouples the network topology from the local controller by strictly confining distributed interactions to the DO (26). Consequently, local state estimation and fault-tolerant control are executed independently, making it highly suitable for practical leader–following MASs (2) and (3).

4.2. DO-Based Fault-Tolerant Control Protocol

In this subsection, we design a DO-based fault-tolerant control protocol to ensure that the multi-agent system reaches consensus according to Definition 1.
Define \chi_i = x_i - x_0; then its dynamics are

\dot{\chi}_i = A \chi_i + B (u_i + d_i - u_0)    (29)

Now, based on the DO (26), the UIO (9) and the FR (23), we propose an observer-based state feedback control scheme with fault compensation:

u_i = K \hat{\chi}_i - \hat{d}_i    (30)

where \hat{\chi}_i = \hat{x}_i - x_{0,i}; then the closed-loop form of (29) is

\dot{\chi}_i = (A + BK) \chi_i - BK \tilde{x}_i + BK \bar{x}_{0,i} + B \tilde{d}_i - B u_0    (31)

Substituting (25) into (31), and noticing that \tilde{x}_i = \tilde{\zeta}_i, we have

\dot{\chi}_i = (A + BK) \chi_i + BK \bar{x}_{0,i} - B ((CB)^{\dagger} C A + K) \tilde{\zeta}_i - B u_0    (32)

The overall system of (32) is

\dot{\chi} = [I_N \otimes (A + BK)] \chi + (I_N \otimes BK) \bar{x}_0 - [I_N \otimes B ((CB)^{\dagger} C A + K)] \tilde{\zeta} - (1_N \otimes B) u_0    (33)
Now, the DO-based control protocol mechanism can be illustrated by Figure 1.
Remark 7.
Through a series of procedures, a DO-based control protocol mechanism for MASs (2) and (3), illustrated clearly in Figure 1, is developed. The mechanism is accomplished by several constructions. Firstly, a DO (26) is developed such that each follower can obtain an asymptotically convergent estimate of the leader's state. Secondly, a local UIO (9) ensures that each follower agent reaches asymptotically convergent state estimation by eliminating the negative effect of the fault. Thirdly, through a sensor interval observer, an FR method (23) is given; the proposed FR is capable of estimating the actual fault and, in addition, it is decoupled from the control input u_i. Finally, for each follower agent, a controller (30) is constructed using the estimated information provided by the DO (26), the local UIO (9) and the FR (23). Because of the DO, the controller (30) looks like a local one rather than a distributed one. In fact, the proposed DO-based control protocol consists of two parts: the combination of observer constructions (the UIO (9) together with the FR (23), and the DO (26)), and the feedback controller (30). It should be emphasized that the proposed control protocol is a distributed one whose distributed feature has been moved onto the DO, resulting in a controller of centralized form. Next, we will prove that the DO-based consensus mechanism is able to fulfill the task of asymptotic MAS consensus.
Denote \psi = [\chi^T, \bar{x}_0^T, \tilde{\zeta}^T, \tilde{x}_0^T]^T; then by (3), (5), (11), (28) and (33), we obtain

\dot{\psi} = \Xi \psi + \bar{B} u_0    (34)

where

\Xi = \begin{bmatrix} I_N \otimes (A+BK) & I_N \otimes BK & -I_N \otimes B((CB)^{\dagger} CA + K) & 0 \\ 0 & I_N \otimes A + H \otimes M & 0 & -(D 1_N) \otimes M \\ 0 & 0 & I_N \otimes (\bar{A} - LC) & 0 \\ 0 & 0 & 0 & A - \bar{L} C \end{bmatrix}, \quad \bar{B} = \begin{bmatrix} -1_N \otimes B \\ 1_N \otimes B \\ 0 \\ 0 \end{bmatrix}

Before we present the main result, we further define the symmetric block matrix

\Pi = \begin{bmatrix} \Pi_{11} & \Pi_{12} & \Pi_{13} & 0 & \Pi_{15} \\ * & \Pi_{22} & 0 & \Pi_{24} & \Pi_{25} \\ * & * & \Pi_{33} & 0 & 0 \\ * & * & * & \Pi_{44} & 0 \\ * & * & * & * & \Pi_{55} \end{bmatrix}    (35)

where

\Pi_{11} = I_N \otimes (P_1 A + A^T P_1 + 2 P_1 B K + I_n)
\Pi_{12} = I_N \otimes P_1 B K
\Pi_{13} = -I_N \otimes P_1 B ((CB)^{\dagger} C A + K)
\Pi_{15} = -1_N \otimes P_1 B
\Pi_{22} = I_N \otimes (P_2 A + A^T P_2 + I_n) + H \otimes Y + H^T \otimes Y^T
\Pi_{24} = -(D 1_N) \otimes Y
\Pi_{25} = 1_N \otimes P_2 B
\Pi_{33} = I_N \otimes (P_3 \bar{A} + \bar{A}^T P_3 - U C - C^T U^T)
\Pi_{44} = P_4 A + A^T P_4 - \bar{U} C - C^T \bar{U}^T + I_n
\Pi_{55} = -\gamma^2 I_m
Theorem 1.
Under Assumptions 1–4, if the LMI \Pi < 0 admits solutions P_1 > 0, P_2 > 0, P_3 > 0, P_4 > 0, Y, U, \bar{U} with minimal \gamma, then the leader–following consensus control of MASs (2) and (3) can be fulfilled under the DO-based control protocol (30) in the sense of H-infinity stability, where we set M = P_2^{-1} Y, L = P_3^{-1} U and \bar{L} = P_4^{-1} \bar{U}.
Proof of Theorem 1.
Consider the Lyapunov function V = \psi^T P \psi, where P = \operatorname{diag}(I_N \otimes P_1, I_N \otimes P_2, I_N \otimes P_3, P_4). Then, the derivative of V along the trajectories of (34) satisfies

\dot{V} + \psi^T \psi - \gamma^2 u_0^T u_0 = [\psi^T \; u_0^T] \, \Pi \, [\psi^T \; u_0^T]^T    (36)

where \Pi is defined by (35). Now, since \Pi < 0, (36) gives \|\psi\| \le \gamma \|u_0\| with the minimal \gamma, which means that the dynamic system (34) is H-infinity stable with respect to the prescribed norm performance index, and this completes the proof of Theorem 1.    □
In order to calculate the parameters required in the designs, we formulate the following Riccati equation:

I_N \otimes (A^T P_1 + P_1 A - 2 P_1 B B^T P_1) = -I_N \otimes Q_1    (37)

Subsequently, to determine the state feedback gain matrix K in (30), K_0 in (6), \bar{L} in (4), L in (9), and M in (26), we offer the following Algorithm 1.
Algorithm 1 Calculation Process of the Gain Matrices
Input: System matrices A, B, C; degree matrix D; generalized Laplacian matrix H.
Output: DO gain matrix M; observer gain matrices L, \bar{L}; and state feedback gain matrices K, K_0.
1: Solve the algebraic Riccati equation (ARE) (37) to obtain P_1; then compute K = -B^T P_1.
2: Solve the LMI \Pi < 0 to obtain P_2, P_3, P_4, Y, U, \bar{U} and \gamma; then set K_0 = -B^T P_2, M = P_2^{-1} Y, L = P_3^{-1} U and \bar{L} = P_4^{-1} \bar{U}.
3: Return M, L, \bar{L}, K, K_0.
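Step 1 of the algorithm can be carried out with a standard ARE solver; a sketch under the sign conventions assumed above, using the Example 2 model matrices (whose signs are reconstructed, since the extracted text dropped them):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Example 2 model (signs reconstructed from the standard companion form)
A = np.array([[0.0, 1.0], [-2.5, -1.0]])
B = np.array([[0.0], [0.25]])
Q1 = 2.0 * np.eye(2)

# With R = 0.5*I, the standard ARE  A'P + PA - P B R^{-1} B' P + Q1 = 0
# becomes  A'P1 + P1 A - 2 P1 B B' P1 + Q1 = 0, i.e., equation (37).
P1 = solve_continuous_are(A, B, Q1, 0.5 * np.eye(1))
K = -B.T @ P1            # assumed sign convention so that A + B K is Hurwitz

print(np.linalg.eigvals(A + B @ K))   # all eigenvalues in the open left half-plane
```

A quick Lyapunov argument shows why this K stabilizes: along \dot{x} = (A + BK)x with V = x^T P_1 x, the ARE gives \dot{V} = -x^T Q_1 x < 0.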

5. Simulation Results

In this section, two real systems are taken as simulation examples to verify the effectiveness of the DO-based control protocol.
Example 1.
Consider a multi-agent system comprising a single leader and six followers, where each node is modeled as a dual-mass-spring-damper mechanism. The communication topology is displayed in Figure 2, where the leader state can be accessed only by follower agent 1, and the communication graph among the followers is an undirected one. Based on Figure 2, the Laplacian matrix is
L = \begin{bmatrix} 4 & -1 & -1 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 0 & 0 \\ -1 & -1 & 2 & 0 & 0 & 0 \\ -1 & 0 & 0 & 3 & -1 & -1 \\ 0 & 0 & 0 & -1 & 2 & -1 \\ 0 & 0 & 0 & -1 & -1 & 2 \end{bmatrix}
Each agent comprises two mass blocks and two springs, as shown in Figure 3. The system state is defined as x_i = [p_{i1}, v_{i1}, p_{i2}, v_{i2}]^T, where p_{i1} and v_{i1} denote the displacement and velocity of mass 1, and p_{i2} and v_{i2} denote the displacement and velocity of mass 2. This system is controlled by a force F acting on m_2, which is the control input u_i in (38). Then, the dynamics take the following form:
\dot{x}_i = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\frac{k_1 + k_2}{m_1} & -\frac{c_1 + c_2}{m_1} & \frac{k_2}{m_1} & \frac{c_2}{m_1} \\ 0 & 0 & 0 & 1 \\ \frac{k_2}{m_2} & \frac{c_2}{m_2} & -\frac{k_2}{m_2} & -\frac{c_2}{m_2} \end{bmatrix} x_i + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{m_2} \end{bmatrix} (u_i + \omega_i), \quad y_i = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} x_i    (38)
where m_1 = 10.0 kg and m_2 = 5.0 kg; k_1 = 20.0 N/m and k_2 = 15.0 N/m indicate the stiffness of the two springs; and c_1 = 12.0 N·s/m and c_2 = 8.0 N·s/m are the damping coefficients. It is clear that system (38) takes the form of (2). To construct the MUI, we assume for every follower i = 1, \ldots, 6 that \eta_i(t) = 0.5 for t \le 15 and \eta_i(t) = 1 for t > 15, that f_{a,i}(t) = 2 for t \le 15 and f_{a,i}(t) = 2 \sin(3t) for t > 15, and that the external disturbances \omega_i(t) \in R in (2) are given by
\omega_1(t) = 1.2 \sin(0.8 t) + 0.6 \sin(4 t)
\omega_2(t) = (1.0 + 0.5 \sin(0.2 t)) \sin(2 t)
\omega_3(t) = 1.8 \sin(t + 0.8 \sin(0.5 t))
\omega_4(t) = 1.5 \sin(t) for t \le 15, and 1.5 \sin(t) + 0.4 \sin(5 (t - 15)) for t > 15
\omega_5(t) = (1.0 + 0.8 \tanh(3 \sin(0.2 t))) \sin(2 t)
\omega_6(t) = 0.6 \sin(t) + 0.6 \sin(2 t) + 0.6 \sin(5 t)
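The plant data of (38) can be assembled and the standing assumptions checked numerically; a sketch in which the matrix signs follow the standard two-mass-spring-damper model, since the extracted equation dropped them:

```python
import numpy as np

m1, m2 = 10.0, 5.0
k1, k2 = 20.0, 15.0
c1, c2 = 12.0, 8.0

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-(k1 + k2) / m1, -(c1 + c2) / m1, k2 / m1, c2 / m1],
              [0.0, 0.0, 0.0, 1.0],
              [k2 / m2, c2 / m2, -k2 / m2, -c2 / m2]])
B = np.array([[0.0], [0.0], [0.0], [1.0 / m2]])
C = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Assumption 2 (observer matching): rank(CB) = rank(B) = m = 1
print(np.linalg.matrix_rank(C @ B), np.linalg.matrix_rank(B))

# Assumption 3: (A, B) stabilizable -- here even fully controllable
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
print(np.linalg.matrix_rank(ctrb))
```

Only the last output row of C touches the actuated velocity state v_{i2}, which is exactly why CB has the single nonzero entry 1/m_2 and the matching condition holds.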
By solving the ARE (37) with Q 1 = 2 I N , the positive definite matrix and the correlated state feedback gain matrices are obtained as follows:
P 1 = 33.7621 7.5022 3.3705 5.6305 7.5022 15.3939 1.5684 9.4105 3.3705 1.5684 28.1465 4.0103 5.6305 9.4105 4.0103 11.7391 K = 1.1261 1.8821 0.8021 2.3478
Next, we solve LMI Π 0 to obtain P 2 , P 3 , P 4 , U , U and Y as:
P 2 = 1541.6 353.8 841.7 14.8 353.8 206.6 210.4 52.3 841.7 210.4 484.9 17.2 14.8 52.3 17.2 40.8 K 0 = 2.964 10.461 3.447 8.163
P 3 = 2903.4 325.7 50.8 1553.4 325.7 2218.0 955.6 194.4 50.8 955.6 1944.8 1910.7 1553.4 194.4 1910.7 4041.4 U = 6024 9206 9691 1257 2357 7320 4792 14 , 033 4883 1087 4883 2555 P 4 = 3733.2 93.2 23.2 698.9 93.2 1850.4 3.5 351.8 23.2 3.5 2547.3 3.1 698.9 351.8 3.1 2734.4 U = 831.6 1061.9 4243.3 1555.1 511.1 91.6 1273.6 1725.7 2496.7 4156.7 2496.7 2306.6
Y = 0.1000 0.1000 0.1000 0.4371 0.1000 0.1000 0.1000 0.1000 0.1000 0.1000 0.1000 0.1000 0.6829 0.1000 0.1000 0.1000
Moreover, the minimum parameter value $\gamma = 2.6187$ is obtained through Algorithm 1, and we have
$$
L=\begin{bmatrix}0.1287 & 0.1006 & 1.0207\\ 1.1719 & 0.4494 & 0.0123\\ 0.5049 & 0.6766 & 0.9716\\ 1.7044 & 0.9444 & 0.5853\end{bmatrix},\qquad
L=\begin{bmatrix}24.4191 & 52.5562 & 10.6957\\ 27.7509 & 62.1136 & 11.9467\\ 50.4286 & 113.3521 & 14.2307\\ 34.2928 & 75.5695 & 10.7813\end{bmatrix}
$$
$$
M=\begin{bmatrix}0.0007 & 0.0029 & 0.0029 & 0.0075\\ 0.0139 & 0.0036 & 0.0036 & 0.0032\\ 0.0038 & 0.0066 & 0.0066 & 0.0144\\ 0.0326 & 0.0053 & 0.0053 & 0.0032\end{bmatrix}
$$
For the 2-DOF mass-spring-damper system, the leader's initial state is set as $x_0(0)=[2\;0\;2\;0]^T$, and the followers' initial states are $x_1(0)=[2\;0.4\;1.6\;0.0]^T$, $x_2(0)=[2.4\;0.0\;2\;0.0]^T$, $x_3(0)=[2\;0.0\;2.4\;0.0]^T$, $x_4(0)=[2.4\;0.0\;4\;0.0]^T$, $x_5(0)=[2.8\;0.0\;2.4\;0.0]^T$ and $x_6(0)=[2\;0.0\;1.6\;0.0]^T$.
To verify the effectiveness of the proposed method, detailed simulation results are presented as follows. Figure 4 and Figure 5 show the FR performance, i.e., $d_i$ and its reconstruction $\hat{d}_i$, in which we set $\underline{u}_i=-[2\;2\;2\;2\;2\;2]^T$, $\overline{u}_i=[2\;2\;2\;2\;2\;2]^T$, $\underline{f}_{a,i}=-[2\;2\;2\;2\;2\;2]^T$, $\overline{f}_{a,i}=[2\;2\;2\;2\;2\;2]^T$, $\underline{\eta}_i=[0\;0\;0\;0\;0\;0]^T$, $\overline{\eta}_i=[1\;1\;1\;1\;1\;1]^T$, $\underline{\omega}_i=-[2\;2\;2\;2\;2\;2]^T$ and $\overline{\omega}_i=[2\;2\;2\;2\;2\;2]^T$. From these bounds we obtain $\underline{d}_i=-[6\;6\;6\;6\;6\;6]^T$ and $\overline{d}_i=[8\;8\;8\;8\;8\;8]^T$. For the FR, $\rho_1=10.5$ and $\rho_2=8.7$ in (24), and $Q=\operatorname{diag}(10.172,\;10.5915,\;10)$ in (13). For the DO-based control protocol, the force $F$, which serves as the control input $u_i$, is displayed in Figure 6 and Figure 7. With each follower's own state estimate and the FR of $d_i$ in hand, the leader's state estimate produced in the $i$th follower by the DO (26) is shown in Figure 8. Under the DO-based fault-tolerant control protocol, MAS consensus is achieved asymptotically for the 2-DOF mass-spring-damper system: the displacements and velocities of masses $m_1$ and $m_2$ can be clearly seen in Figure 9.
Example 2.
Consider the time-varying switching topology scenario described in Figure 10, with agent dynamics of the form:
$$
A=\begin{bmatrix}0 & 1.0\\ 2.5 & 1.0\end{bmatrix},\qquad
B=\begin{bmatrix}0\\ 0.25\end{bmatrix},\qquad
C=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}
$$
Assume that Agent 2 receives information from Agent 1 only periodically: when Agent 2 cannot obtain the state of Agent 1, the topology is $G_1$ as shown in Figure 10; otherwise, the topology is $G_2$. To simulate the dynamic disconnection and reconnection of these communication links, a piecewise-constant switching signal $\sigma(t):[0,30)\to\{G_1,G_2\}$ is introduced. The network topology switches periodically every 5 s, which is formulated as:
$$
\sigma(t)=\begin{cases}G_1, & t\in[10k,\;10k+5)\\ G_2, & t\in[10k+5,\;10(k+1))\end{cases},\qquad k\in\mathbb{N}
$$
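The switching signal is straightforward to implement; a minimal sketch, with the topologies represented by name only:

```python
def sigma(t: float) -> str:
    """Periodic switching signal: G1 on [10k, 10k+5), G2 on [10k+5, 10(k+1))."""
    return "G1" if (t % 10.0) < 5.0 else "G2"

print([sigma(t) for t in (0.0, 4.9, 5.0, 9.9, 10.0, 17.0)])
# ['G1', 'G1', 'G2', 'G2', 'G1', 'G2']
```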
Under this switching mechanism, the DO (26) for the $i$th follower is naturally extended by incorporating the time-varying adjacency weights $a_{ij}^{\sigma(t)}$:
$$
\dot{\hat{x}}_{0,i}=A\hat{x}_{0,i}+M\left[\sum_{j=1}^{N}a_{ij}^{\sigma(t)}\bigl(\hat{x}_{0,i}-\hat{x}_{0,j}\bigr)+a_{i0}\bigl(\hat{x}_{0,i}-\hat{x}_0\bigr)\right]
$$
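The mechanism can be illustrated with a small Euler integration: two followers run leader-state observers while the link from follower 1 to follower 2 switches on and off. Everything numerical here is an assumption of this sketch (a stable illustrative leader matrix, a scalar coupling gain $\mu$ in place of the designed matrix gain $M$, and a chain topology), not the gains designed in the paper; it only shows that the switching estimates still track the leader.

```python
import numpy as np

# Illustrative integration of a switching distributed observer:
# follower 1 hears the leader in both topologies; follower 2 hears
# follower 1 only in G2. A, mu, and the weights are assumptions of
# this sketch, not the designed gain M of the paper.
A = np.array([[0.0, 1.0], [-2.5, -1.0]])
mu = 5.0

def weights(t):
    """Adjacency weights under the periodic switching signal."""
    if (t % 10.0) < 5.0:                 # topology G1
        return {"a10": 1.0, "a21": 0.0}
    return {"a10": 1.0, "a21": 1.0}      # topology G2

dt, T = 1e-3, 30.0
x0 = np.array([2.0, 0.0])    # leader state, dot(x0) = A x0
z1 = np.array([0.0, 0.0])    # follower 1's estimate of the leader
z2 = np.array([-1.0, 1.0])   # follower 2's estimate of the leader
for k in range(int(T / dt)):
    w = weights(k * dt)
    dz1 = A @ z1 - mu * w["a10"] * (z1 - x0)
    dz2 = A @ z2 - mu * w["a21"] * (z2 - z1)
    x0, z1, z2 = x0 + dt * (A @ x0), z1 + dt * dz1, z2 + dt * dz2

print(np.linalg.norm(z1 - x0), np.linalg.norm(z2 - x0))  # both near zero
```

During $G_1$ intervals, follower 2 receives no corrections and simply propagates its estimate through the leader model; the coupling during $G_2$ intervals then pulls it back toward follower 1's estimate.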
For this second physical model, we keep the same $\eta_i$ and $f_{a,i}$ under the DO-based fault-tolerant control protocol. By solving the ARE (37) and the LMI $\Pi<0$, the positive definite matrices and the corresponding gain matrices are obtained as follows:
$$
P_1=\begin{bmatrix}29.1241 & 1.9089\\ 1.9089 & 4.6203\end{bmatrix},\quad
P_2=\begin{bmatrix}11.0282 & 0.1897\\ 0.1897 & 2.5146\end{bmatrix},\quad
P_3=\begin{bmatrix}995.9838 & 958.5875\\ 958.5875 & 999.9381\end{bmatrix}
$$
$$
K=\begin{bmatrix}0.9545 & 2.3101\end{bmatrix},\quad
K_0=\begin{bmatrix}0.0474 & 0.6286\end{bmatrix},\quad
U=\begin{bmatrix}1872.0 & 9355.0\\ 9355.0 & 4823.0\end{bmatrix}
$$
$$
P_4=\begin{bmatrix}999.9493 & 999.9096\\ 999.9096 & 999.9828\end{bmatrix},\quad
Y=\begin{bmatrix}0.1 & 0.1856\\ 0.1 & 0.1\end{bmatrix},\quad
L=\begin{bmatrix}17{,}748 & 17{,}770\\ 17{,}744 & 17{,}768\end{bmatrix}
$$
$$
U=\begin{bmatrix}4972.4 & 2968.6\\ 2968.6 & 962.3\end{bmatrix},\quad
M=\begin{bmatrix}0.0084 & 0.0162\\ 0.0391 & 0.0385\end{bmatrix},\quad
L=\begin{bmatrix}126.5875 & 61.4250\\ 111.9966 & 54.0617\end{bmatrix}
$$
The minimum parameter value $\gamma = 2.0578$ is obtained by Algorithm 1. The leader's initial state is set to $x_0(0)=[2\;0]^T$, and the followers' initial states are $x_1(0)=[2\;0.4]^T$, $x_2(0)=[2.4\;0.0]^T$, $x_3(0)=[2\;0.0]^T$, $x_4(0)=[2.4\;0.0]^T$, $x_5(0)=[2.8\;0.0]^T$ and $x_6(0)=[2\;0.0]^T$.
For the system with directed topology and sparse connections, the followers' states converge to the leader's state asymptotically, as shown in Figure 11. For the FR, $Q=\operatorname{diag}(160.2,\;150.6)$ in (13); the multiple unknown inputs $d_i$ and their reconstructions $\hat{d}_i$ are shown in Figure 12 and Figure 13. Clearly, even under time-varying topology switching, the DO-based fault-tolerant control protocol achieves consensus control successfully.

6. Conclusions

In this paper, a DO-based fault-tolerant control protocol is developed for MAS consensus, in which each follower derives an estimate of the leader's state through the DO. The proposed DO estimates the leader's state asymptotically. Meanwhile, the followers suffer from actuator faults, so a local UIO is designed for each follower to estimate its state asymptotically by decoupling the fault. Moreover, an FR scheme that estimates the actual fault asymptotically is proposed via an interval observer; the FR also decouples the control input of the current follower. Finally, for the $i$th follower, a fault-tolerant controller (30) is constructed from the leader's state estimate provided by the DO (26), the follower's state estimate provided by the UIO (9), and the FR (23). The resulting DO-based fault-tolerant control protocol is distributed, with the distributed feature mainly reflected in the DO, while the controller itself takes a centralized form. Future work will focus on extending the DO-based fault-tolerant control protocol to heterogeneous MASs and complex formation control tasks.

Author Contributions

Methodology, T.L.; software, F.Z.; validation, T.L. and H.X.; formal analysis, T.L., H.X. and F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (grant number 61973236).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LMI   Linear Matrix Inequality
UIO   Unknown Input Observer
DO    Distributed Observer
DIO   Distributed Interval Observer
FR    Fault Reconstruction

Appendix A. Proof of Lemma 1

According to the condition in Assumption 1, we have
$$
\begin{aligned}
n+m &= \operatorname{rank}\begin{bmatrix}sI_n-A & B\\ C & 0\end{bmatrix}
= \operatorname{rank}\left(\begin{bmatrix}sI_n-A & B\\ C & 0\end{bmatrix}\begin{bmatrix}I_n & 0\\ (CB)^{\dagger}CA & I_m\end{bmatrix}\right)\\
&= \operatorname{rank}\begin{bmatrix}sI_n-A+B(CB)^{\dagger}CA & B\\ C & 0\end{bmatrix}
= \operatorname{rank}\begin{bmatrix}sI_n-A_C\\ C\end{bmatrix}+m
\end{aligned}
$$
where $A_C=A-B(CB)^{\dagger}CA$. This gives $\operatorname{rank}\begin{bmatrix}sI_n-A_C\\ C\end{bmatrix}=n$ for all $s$ with $\operatorname{Re}(s)\ge 0$. Therefore, the matrix pair $(A_C,C)$ is detectable.
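The rank chain can be checked numerically on the 2-DOF example of Section 5 (a spot check at randomly drawn points $s$ with $\operatorname{Re}(s)\ge 0$, not a proof):

```python
import numpy as np

# Numerical check of the rank chain in Appendix A for the 2-DOF example:
# rank[sI - A, B; C, 0] = n + m and rank[sI - A_C; C] = n for Re(s) >= 0,
# where A_C = A - B (CB)^+ C A.
m1, m2, k1, k2, c1, c2 = 10.0, 5.0, 20.0, 15.0, 12.0, 8.0
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-(k1 + k2) / m1, -(c1 + c2) / m1, k2 / m1, c2 / m1],
              [0.0, 0.0, 0.0, 1.0],
              [k2 / m2, c2 / m2, -k2 / m2, -c2 / m2]])
B = np.array([[0.0], [0.0], [0.0], [1.0 / m2]])
C = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
n, m = 4, 1

A_C = A - B @ np.linalg.pinv(C @ B) @ C @ A

rng = np.random.default_rng(0)
ranks1, ranks2 = [], []
for _ in range(5):
    s = complex(rng.uniform(0.0, 2.0), rng.uniform(-2.0, 2.0))
    pencil = np.block([[s * np.eye(n) - A, B.astype(complex)],
                       [C.astype(complex), np.zeros((3, m), dtype=complex)]])
    ranks1.append(int(np.linalg.matrix_rank(pencil)))
    ranks2.append(int(np.linalg.matrix_rank(np.vstack([s * np.eye(n) - A_C, C]))))

print(ranks1, ranks2)  # all entries 5 and all entries 4
```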

Appendix B. Proof of Lemma 4

For $\overline{\varepsilon}_i\ge 0$ and $\underline{\varepsilon}_i\ge 0$ in (14) to hold, the error system (14) needs to be a positive system. By Lemma 2, we have
$$
(CB)^{+}\overline{d}_i-(CB)^{-}\underline{d}_i-CBd_i\ge 0
$$
$$
CBd_i-(CB)^{+}\underline{d}_i+(CB)^{-}\overline{d}_i\ge 0
$$
holding for all $t\ge 0$. Since $\tilde{x}_i$ converges to zero asymptotically, there exists $\tau_0>0$ such that
$$
CA\tilde{x}_i+(CB)^{+}\overline{d}_i-(CB)^{-}\underline{d}_i-CBd_i\ge 0
$$
$$
-CA\tilde{x}_i+CBd_i-(CB)^{+}\underline{d}_i+(CB)^{-}\overline{d}_i\ge 0
$$
hold for $t\ge\tau_0$. By Lemma 2, $\overline{\varepsilon}_i(\tau_0)\ge 0$ and $\underline{\varepsilon}_i(\tau_0)\ge 0$. Then, by Lemma 3, since $Q$ is a Metzler matrix, Equation (14) is a positive system, and $\underline{y}_i(t)\le y_i(t)\le\overline{y}_i(t)$ holds for all $t\ge\tau_0$.
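The elementwise bounding used above, with $M^{+}=\max(M,0)$ and $M^{-}=M^{+}-M$, holds for any matrix and any $d\in[\underline{d},\overline{d}]$; a quick random-data check (the matrix here is just a stand-in for $CB$):

```python
import numpy as np

# Interval bound from Appendix B: writing M = M^+ - M^- with
# M^+ = max(M, 0), any d in [d_lo, d_hi] satisfies
# M^+ d_lo - M^- d_hi <= M d <= M^+ d_hi - M^- d_lo.
rng = np.random.default_rng(1)
CB = rng.normal(size=(3, 6))                         # stand-in for CB
CBp, CBm = np.maximum(CB, 0.0), np.maximum(-CB, 0.0)

d_lo = rng.uniform(-2.0, 0.0, size=6)
d_hi = rng.uniform(0.0, 2.0, size=6)
d = rng.uniform(d_lo, d_hi)                          # d_lo <= d <= d_hi

upper = CBp @ d_hi - CBm @ d_lo
lower = CBp @ d_lo - CBm @ d_hi
print(bool(np.all(lower <= CB @ d)), bool(np.all(CB @ d <= upper)))
```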

Figure 1. DO-based fault-tolerant control protocol mechanism.
Figure 2. Communication topology.
Figure 3. Two-degree-of-freedom (2-DOF) mass-spring-damper system.
Figure 4. $d_i$ and its reconstruction $\hat{d}_i$ ($i = 1, 2, 3$).
Figure 5. $d_i$ and its reconstruction $\hat{d}_i$ ($i = 4, 5, 6$).
Figure 6. Control input $u_i$ ($i = 1, 2, 3$).
Figure 7. Control input $u_i$ ($i = 4, 5, 6$).
Figure 8. DO estimation for each agent.
Figure 9. State trajectory for each agent.
Figure 10. Time-varying switching topology.
Figure 11. State trajectory for each agent.
Figure 12. $d_i$ and its reconstruction $\hat{d}_i$ ($i = 1, 2, 3$).
Figure 13. $d_i$ and its reconstruction $\hat{d}_i$ ($i = 4, 5, 6$).
Table 2. Common fault types.

Fault Mode              $\underline{\eta}$   $\overline{\eta}$   $f_{a,i}$
Normal                  1                    1                   $0_m$
Outage                  0                    0                   $0_m$
Bias                    1                    1                   $\ne 0_m$
Stuck                   0                    0                   $\ne 0_m$
Loss of effectiveness   $>0$                 $<1$                $0_m$
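Table 2 can be read through the common multiplicative-plus-additive actuator fault model $u_i^{f}=\eta_i u_i+f_{a,i}$ (an assumed form here, chosen to be consistent with the table's columns):

```python
def faulty_input(u: float, eta: float, f_a: float) -> float:
    """Effective actuator output under the assumed model u_f = eta*u + f_a."""
    return eta * u + f_a

u = 1.5
print(faulty_input(u, 1.0, 0.0))   # normal: u itself
print(faulty_input(u, 0.0, 0.0))   # outage: zero output
print(faulty_input(u, 1.0, 0.4))   # bias: u shifted by f_a
print(faulty_input(u, 0.0, 0.7))   # stuck: output fixed at f_a
print(faulty_input(u, 0.6, 0.0))   # loss of effectiveness: scaled-down u
```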

Share and Cite

MDPI and ACS Style

Liu, T.; Zhu, F.; Xu, H. Leader–Following Fault-Tolerant Consensus Control for Multi-Agent Systems Based on Observers. Sensors 2026, 26, 3153. https://doi.org/10.3390/s26103153
