Mathematics
  • Article
  • Open Access

7 November 2025

Data-Driven Fully Distributed Fault-Tolerant Consensus Control for Nonlinear Multi-Agent Systems: An Observer-Based Approach

1 School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 Heilongjiang Provincial Climate Center, Harbin 150030, China
3 The Third Military Representative Office of the Air Force Armaments Department in Harbin, Harbin 150010, China
4 School of Electronic Information and Electrical Engineering, Chengdu University, Chengdu 610106, China
This article belongs to the Special Issue Intelligent Control and Applications of Nonlinear Dynamic System

Abstract

This paper introduces a novel observer-based, fully distributed fault-tolerant consensus control algorithm for model-free adaptive control, specifically designed to tackle the consensus problem in nonlinear multi-agent systems. The method addresses the issue of followers lacking direct access to the leader’s state by employing a distributed observer that estimates the leader’s state using only local information from the agents. This transforms the consensus control challenge into multiple independent tracking tasks, where each agent can independently follow the leader’s trajectory. Additionally, an extended state observer based on a data-driven model is utilized to estimate unknown actuator faults, with a particular focus on brake faults. Integrated into the model-free adaptive control framework, this observer enables real-time fault detection and compensation. The proposed algorithm is supported by rigorous theoretical analysis, which ensures the boundedness of both the observer and tracking errors. Simulation results further validate the algorithm’s effectiveness, demonstrating its robustness and practical viability in real-time fault-tolerant control applications.

1. Introduction

In recent years, multi-agent systems (MASs) have received significant attention due to their wide-ranging applications in autonomous driving [1,2], unmanned aerial vehicles [3,4], and sensor networks [5,6]. Among various coordination objectives, consensus control plays a central role by enabling agents to reach agreement on key variables through local communication, supporting higher-level tasks such as formation control [7], distributed estimation [8], and cooperative decision-making [9]. To achieve consensus, existing methods are broadly classified into model-based and model-free approaches. While model-based strategies rely on known system dynamics and offer rigorous theoretical guarantees, their applicability is often limited in practical scenarios involving heterogeneous agents, uncertain environments, or time-varying dynamics, where accurate modeling is difficult or infeasible. These limitations have led to growing interest in model-free or data-driven consensus control methods, which utilize real-time input–output data to achieve coordination without requiring explicit knowledge of system models.
Among model-free control frameworks, model-free adaptive control (MFAC) has shown particular effectiveness due to its reliance on online input–output data and pseudo-gradient estimation, which avoids explicit model identification while ensuring low complexity and interpretability [10,11,12,13,14,15]. Other data-driven approaches, such as reinforcement learning [16,17], neural networks [18], and fuzzy-based control [19], have also been explored, yet they often require extensive training or large datasets, limiting their real-time applicability. In contrast, MFAC achieves faster convergence with lower computational cost, making it particularly well-suited for real-time control of uncertain nonlinear systems. Recently, MFAC has been extended to the multi-agent domain [20,21,22,23,24], where distributed MFAC-based controllers have been proposed to solve consensus problems under model uncertainty using only local data. In [20], an MFAC framework was developed to address the consensus control problem of multi-agent systems (MASs) subject to deception attacks. The asymmetric bipartite consensus tracking problem was studied in [22], where event-triggered mechanisms were incorporated to improve communication efficiency. In addition, ref. [23] investigated distributed control under denial-of-service (DoS) attacks and external disturbances. While these schemes have shown promising performance, they also introduce several intrinsic challenges. Notably, many existing designs rely on consensus error signals, which often involve future outputs or states of neighboring agents—quantities that are not directly measurable in real time. This dependence increases implementation complexity and can introduce estimation errors. In addition, the use of consensus errors typically leads to parameter coupling among agents, thereby complicating controller tuning and reducing scalability in large or heterogeneous networks.
A further limitation of existing MFAC-based consensus methods is the assumption of nominal actuator functionality. In practice, actuator faults—such as bias, partial loss of effectiveness, or complete degradation—frequently occur due to hardware aging, mechanical wear, or harsh environmental conditions. These faults can significantly distort the applied control signal, degrade tracking performance, or even prevent consensus. To the best of our knowledge, only a relatively small number of MFAC-based multi-agent consensus studies have considered faults occurring in the system [25,26,27]. In [25,26], the output saturation problem was addressed in the tracking control of MASs. Actuator faults were further investigated in [27], where a distributed control strategy was developed to ensure system stability under fault conditions. Moreover, since many existing MFAC consensus schemes rely on consensus error terms, a fault in one agent not only affects its own behavior but also propagates through the network via local interactions. This inter-agent coupling can amplify fault effects and deteriorate global control performance, particularly in large-scale or tightly interconnected systems. These challenges underscore the importance of developing robust, fault-tolerant consensus strategies for multi-agent systems under model uncertainty.
Motivated by the above challenges, this paper develops a fully distributed fault-tolerant consensus control framework for nonlinear multi-agent systems, without assuming prior knowledge of agent models. The proposed approach integrates MFAC with real-time observation and compensation mechanisms. The main contributions of the paper are as follows:
  • A fully distributed and purely data-driven consensus control framework is proposed for nonlinear multi-agent systems subject to actuator faults and unknown dynamics. The framework is implemented using only local real-time input–output data and neighbor communication, without relying on global information, system models, or offline training. This design ensures high scalability, adaptability, and applicability in large-scale, uncertain, and fault-prone environments, while significantly reducing implementation complexity.
  • A distributed data-driven observer is designed to eliminate structural coupling and support independent reference tracking for each agent. Unlike traditional MFAC-based algorithms [20,21,22,23,24], where controller design relies on system-wide consensus errors, the introduction of a distributed data-driven observer removes this dependency. Each agent estimates the leader’s state using only local and neighboring input–output data, allowing independent reference tracking and controller tuning. This structure mitigates the propagation of local faults and enhances the overall robustness of the distributed control system.
  • An extended state observer (ESO) is integrated into the control framework to enable real-time fault estimation and compensation. The ESO reconstructs unknown actuator faults and external disturbances from local input–output measurements and feeds the estimates into the control loop for adaptive correction. This mechanism significantly improves consensus reliability under input degradation, without relying on centralized diagnosis, prior model knowledge, or additional sensing infrastructure.

2. Preliminaries and Problem Formulation

2.1. Graph Theory

A weighted graph G(V, E, A) is utilized to characterize the interaction topology among the agents, where V = {v_1, v_2, …, v_M} denotes the set of nodes, E ⊆ V × V represents the edge set, and A = [a_ij] ∈ R^{M×M} is the associated adjacency matrix. An edge (v_i, v_j) ∈ E implies a_ij > 0, indicating that agent j can receive information from agent i; otherwise, a_ij = 0. The neighbor set of agent i is defined as N_i = {v_j ∈ V : (v_i, v_j) ∈ E}. A directed path from node i to node k exists if a sequence of edges connects them, such as (v_i, v_c), (v_c, v_d), …, (v_e, v_k). A directed graph possesses a spanning tree if there exists at least one node (root) that has a directed path to every other node in the graph. The diagonal matrix W = diag(ω_i0) captures the influence of the leader on each follower, where ω_i0 > 0 if follower i can access the leader’s information; otherwise, ω_i0 = 0.
Assumption 1
([28]). A spanning tree is assumed to exist in the graph G , and the leader is accessible from at least one root node via a directed connection.
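Assumption 1 can be checked mechanically from the adjacency structure. The sketch below does so for a hypothetical three-follower chain; the topology and leader-access vector are illustrative assumptions, not taken from the paper:

```python
# Check Assumption 1: some follower that hears the leader must reach every
# other follower along directed edges.

def reaches_all(A, root):
    """True if a directed path exists from `root` to every node.
    A[i][j] > 0 encodes the edge (v_i, v_j): agent j receives from agent i."""
    M = len(A)
    seen, stack = {root}, [root]
    while stack:
        i = stack.pop()
        for j in range(M):
            if A[i][j] > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == M

A = [[0, 1, 0],   # agent 0 -> agent 1
     [0, 0, 1],   # agent 1 -> agent 2
     [0, 0, 0]]
w = [1, 0, 0]     # omega_i0: only agent 0 accesses the leader

# Roots in the sense of Assumption 1: leader-connected nodes reaching all others
roots = [i for i in range(len(A)) if w[i] > 0 and reaches_all(A, i)]
```

Here `roots == [0]`, so the spanning-tree condition of Assumption 1 holds for this example topology.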

2.2. Problem Formulation

Consider a fully heterogeneous nonlinear MAS composed of N followers and a single leader. The dynamics of the ith follower subject to actuator faults are described by:
y_i(ι+1) = f_i(y_i(ι), u_i(ι), u_fi(ι)),  (1)
where y_i(ι) ∈ R^{n_i} denotes the output of the ith follower, u_i(ι) ∈ R^{l_i} and u_fi(ι) ∈ R^{l_i} denote the input and the actuator fault of the ith follower, respectively, and f_i(·) ∈ R^{n_i} represents the unknown dynamics.
The leader’s dynamics is described by:
x_0(ι+1) = A_0 x_0(ι), y_0(ι) = C_0 x_0(ι),  (2)
where x_0(ι) ∈ R^{n_0} and y_0(ι) ∈ R^{m_0} denote the leader’s state and output, respectively, and A_0 ∈ R^{n_0×n_0} and C_0 ∈ R^{m_0×n_0} denote the internal dynamics and output dynamics of the leader, respectively.
Assumption 2
([29]). The modulus of every eigenvalue of A 0 is less than or equal to one.

3. Observer-Based Data-Driven Fault-Tolerant Control Algorithm Design

3.1. Data-Driven Distributed State Observer

To effectively mitigate the adverse effects of actuator faults in MASs, it would be ideal for each follower agent to directly access the state information of the leader. However, in practical scenarios, such direct communication is often constrained by limitations in network topology, communication bandwidth, or system security requirements. These constraints make it infeasible for all agents to receive global information or maintain continuous access to the leader’s state. Therefore, it becomes essential to design distributed observer mechanisms that rely solely on local real-time information exchanged with neighboring agents. In this work, we propose three distributed observers that enable each agent to estimate the leader’s dynamics and state independently.
Â_i(ι+1) = Â_i(ι) + α_i Σ_{j∈N_i} a_ij(Â_j(ι) − Â_i(ι)) + α_i ω_i0(A_0(ι) − Â_i(ι)),  (3a)
Ĉ_i(ι+1) = Ĉ_i(ι) + β_i Σ_{j∈N_i} a_ij(Ĉ_j(ι) − Ĉ_i(ι)) + β_i ω_i0(C_0(ι) − Ĉ_i(ι)),  (3b)
χ_i(ι+1) = Â_i[χ_i(ι) − σ_i Σ_{j∈N_i} a_ij(χ_i(ι) − χ_j(ι)) − σ_i ω_i0(χ_i(ι) − x_0(ι))],  (3c)
where Â_i(ι), Ĉ_i(ι), and χ_i(ι) denote the estimates of the internal dynamics, the output dynamics, and the leader’s state, respectively, and α_i, β_i, and σ_i are the parameters to be designed.
Lemma 1
([29]). Consider a fully heterogeneous nonlinear multi-agent system (MAS) with the leader’s dynamics given by (2). Assuming that Assumptions 1 and 2 hold, and each follower implements the distributed observer as defined in (3), the estimates of the system matrices, A ^ i , C ^ i , and state χ i for each agent i, will converge asymptotically to the leader’s system matrices A 0 , C 0 , and state x 0 , respectively, as ι . This convergence holds provided that the observer gains satisfy the following conditions:
0 < α < 2/ρ(H), 0 < β < 2/ρ(H), 0 < σ < 2/ρ(H),
where ρ(H) represents the spectral radius of the matrix H associated with the graph topology.
Remark 1.
The introduction of the distributed observer decouples the overall multi-agent system, eliminating the reliance on consensus-error-based design inherent in traditional MFAC methods. This decoupling not only improves the scalability and flexibility of the control architecture but also prevents fault propagation among agents, thereby enhancing the overall robustness of the distributed control system.
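To make observer (3a) concrete, the following sketch runs the internal-dynamics observer for a scalar leader over a hypothetical three-follower chain; the value of A_0, the common gain, and the topology are illustrative assumptions:

```python
# Scalar sketch of observer (3a): three followers in a chain estimate the
# leader's internal dynamics A0 using only neighboring estimates.

A0 = 0.98
alpha = 0.4
adj = [[0, 0, 0],   # adj[i][j] = a_ij: follower i uses follower j's estimate
       [1, 0, 0],   # follower 1 listens to follower 0
       [0, 1, 0]]   # follower 2 listens to follower 1
w = [1, 0, 0]       # only follower 0 receives A0 from the leader directly

A_hat = [0.0, 0.0, 0.0]
for _ in range(200):
    A_hat = [A_hat[i]
             + alpha * sum(adj[i][j] * (A_hat[j] - A_hat[i]) for j in range(3))
             + alpha * w[i] * (A0 - A_hat[i])
             for i in range(3)]
```

Even though followers 1 and 2 never hear the leader directly, their estimates converge through the chain, which is the decoupling property described in Remark 1.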

3.2. Data-Driven Fault-Tolerant Control Algorithm

3.2.1. Data Model Construction

Assumption 3
([30,31]). The dynamics of the ith follower in the MAS is assumed to satisfy the following generalized Lipschitz condition.
|Δy_i(ι+1)| ≤ ς_i|u_ri(ι+1) − u_ri(ι)|, ∀ |u_ri(ι+1) − u_ri(ι)| ≠ 0,
where Δy_i(ι+1) = y_i(ι+1) − y_i(ι), u_ri(ι) = u_i(ι) + u_fi(ι), and ς_i > 0 is a constant.
Lemma 2
([32]). Given Assumption 3 and the condition | u r i ( ι + 1 ) u r i ( ι ) | 0 , the dynamics of the ith follower subject to actuator faults can be equivalently expressed by the following data-based model.
Δy_i(ι+1) = ϱ_i(ι)Δu_i(ι) + ϱ_i(ι)Δu_fi(ι),  (4)
where ϱ_i(ι) is the pseudo-partial-derivative (PPD), Δu_i(ι) = u_i(ι) − u_i(ι−1), Δu_fi(ι) = u_fi(ι) − u_fi(ι−1), |Δu_fi(ι)| < u_fm, and |ϱ_i(ι)| ≤ ς_i.

3.2.2. Extended State Observer

Define Z_i(ι) = [z_i1(ι), z_i2(ι)]^T = [y_i(ι), Δu_fi(ι)]^T. Combined with (4), the ESO is designed as
ẑ_i1(ι) = ẑ_i1(ι−1) + ϱ_i(ι−1)Δu_i(ι−1) + ϱ_i(ι−1)ẑ_i2(ι−1) + κ_i1(z_i1(ι−1) − ẑ_i1(ι−1)),
ẑ_i2(ι) = ẑ_i2(ι−1) + κ_i2(z_i1(ι−1) − ẑ_i1(ι−1)),  (5)
where ẑ_i1(ι) and ẑ_i2(ι) are the estimates of z_i1(ι) and z_i2(ι), respectively, and κ_i1 and κ_i2 are observer gains.
Remark 2.
The ESO is employed to estimate the actuator-fault increment in real time, providing essential information for the subsequent controller design. Since the estimate is continuously updated, the ESO output converges to the true value as the estimation error vanishes. This property ensures the correctness of the controller design and contributes to maintaining system stability.
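The ESO recursion (5) can be exercised on a toy scalar example. The sketch below assumes a constant known PPD ϱ = 1, zero control increment, and a constant fault increment (all illustrative assumptions), and uses the gains κ_i1 = 0.54, κ_i2 = 0.08 from Section 5; the estimate ẑ_i2 converges to the true fault increment:

```python
# Toy run of ESO (5) against plant (4) with a constant fault increment
# and no control change.

rho = 1.0            # assumed constant, known PPD for this toy example
k1, k2 = 0.54, 0.08  # kappa_i1, kappa_i2 (Section 5 values)
duf = 0.1            # true constant fault increment, unknown to the ESO

y = 0.0
z1_hat, z2_hat = 0.0, 0.0
for _ in range(300):
    y_prev = y
    y = y + rho * (0.0 + duf)    # plant (4) with du_i = 0
    e = y_prev - z1_hat          # innovation z_i1(t-1) - z_i1_hat(t-1)
    z1_hat = z1_hat + rho * 0.0 + rho * z2_hat + k1 * e
    z2_hat = z2_hat + k2 * e
```

With these gains the error dynamics are a contraction (see Theorem 2), so both the output estimate and the fault-increment estimate settle on the true values.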

3.2.3. Data-Driven Fault-Tolerant Controller Design

Select the cost function J i ( u i ( ι ) ) of the ith follower as
J_i(u_i(ι)) = ‖χ̂_i(ι+1) − y_i(ι+1)‖² + η_i|Δu_i(ι)|²,  (6)
where η_i is the weighting parameter that limits the change in the input of the ith follower, and χ̂_i(ι+1) = Ĉ_i(ι+1)χ_i(ι+1).
Substituting (4) and (5) into (6), and then minimizing (6) with respect to Δ u i ( ι ) , we can obtain
Δu_i(ι) = [ϱ_i(ι)ρ_i(χ̂_i(ι+1) − y_i(ι)) − θ_iϱ_i(ι)ẑ_i2(ι)] / (η_i + ϱ_i²(ι)),  (7)
where ẑ_i2(ι) is the ESO estimate of Δu_fi(ι), and ρ_i ∈ (0, 1] and θ_i ∈ (0, 1] are step factors added for flexibility.
Consider the cost function J i ( ϱ i ) with respect to ϱ i as
J_i(ϱ_i) = |Δy_i(ι) − ϱ_i(ι)Δu_ri(ι−1)|² + ϑ_i|ϱ_i(ι) − ϱ̂_i(ι−1)|²,  (8)
where ϑ i is a positive weighting parameter.
Then, the estimation of PPD ϱ ^ i ( ι ) can be obtained by minimizing (8), so that
ϱ̂_i(ι) = ϱ̂_i(ι−1) + [ϖ_i(Δu_i(ι−1) + ẑ_i2(ι−1)) / (ϑ_i + (Δu_i(ι−1) + ẑ_i2(ι−1))²)] × [Δy_i(ι) − ϱ̂_i(ι−1)(Δu_i(ι−1) + ẑ_i2(ι−1))],  (9)
where ϖ i ( 0 , 1 ] is the step size.
Combining (7) and (9), the model-free adaptive security controller for the ith follower is shown as
ϱ̂_i(ι) = ϱ̂_i(ι−1) + [ϖ_i(Δu_i(ι−1) + ẑ_i2(ι−1)) / (ϑ_i + (Δu_i(ι−1) + ẑ_i2(ι−1))²)] × [Δy_i(ι) − ϱ̂_i(ι−1)(Δu_i(ι−1) + ẑ_i2(ι−1))],  (10a)
ϱ̂_i(ι) = ϱ̂_i(1), if sign(ϱ̂_i(ι)) ≠ sign(ϱ̂_i(1)) or |ϱ̂_i(ι)| < ε,  (10b)
Δu_i(ι) = [ϱ̂_i(ι)ρ_i(χ̂_i(ι+1) − y_i(ι)) − θ_iϱ̂_i(ι)ẑ_i2(ι)] / (η_i + ϱ̂_i²(ι)),  (10c)
where ϱ ^ i ( 1 ) is the initial value of ϱ ^ i ( ι ) ; ε is a positive constant.
Remark 3.
In the proposed control algorithm, several parameters, η i , ϑ i , ϖ i , κ i 1 , and κ i 2 , must be properly tuned to ensure satisfactory performance. The regularization factor η i determines the trade-off between control responsiveness and input smoothness: smaller values enhance response speed but may compromise stability, whereas larger values improve robustness. The parameters ϑ i and ϖ i jointly influence the PPD estimation dynamics, balancing estimation accuracy, smoothness, and adaptability. A smaller ϑ i or larger ϖ i accelerates adaptation but increases sensitivity to disturbances. The step sizes ρ i and θ i regulate the tracking and fault-compensation processes, where higher ρ i strengthens tracking performance and θ i moderates the balance between fault rejection and stability. Finally, κ i 1 and κ i 2 govern the update rate of the fault estimator, and appropriate tuning of these parameters ensures reliable and stable fault detection.
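A single-agent sketch illustrates how the PPD update (10a), the reset rule (10b), and the control increment (10c) interact; the linear toy plant, the constant reference (standing in for the observer output χ̂_i), and every gain below are illustrative assumptions, and no fault is injected, so the ESO term is zero:

```python
# One-agent sketch of scheme (10) tracking a constant reference.

eta, vartheta, varpi = 0.5, 0.5, 0.8   # eta_i, vartheta_i, varpi_i
rho_step, theta = 0.8, 0.6             # step factors rho_i, theta_i
eps, phi_init = 1e-4, 1.0              # reset threshold and initial PPD
ref = 1.0                              # stands in for chi_hat_i(t+1)

phi = phi_init
y = y_prev = 0.0
u = u_prev = 0.0
z2_hat = 0.0                           # ESO estimate; zero since no fault here
for _ in range(400):
    dur = (u - u_prev) + z2_hat
    # (10a): PPD estimation from measured increments
    phi = phi + varpi * dur / (vartheta + dur ** 2) * ((y - y_prev) - phi * dur)
    # (10b): reset if the sign flips or the estimate becomes too small
    if phi * phi_init <= 0 or abs(phi) < eps:
        phi = phi_init
    # (10c): control increment
    du = (phi * rho_step * (ref - y) - theta * phi * z2_hat) / (eta + phi ** 2)
    u_prev, u = u, u + du
    y_prev, y = y, 0.5 * u + 0.2 * y   # toy plant, unknown to the controller
```

For this plant the fixed point is y = 1 with u = 1.6, and the loop converges there without the controller ever knowing the plant coefficients.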

4. Stability Analysis

Theorem 1.
Consider the MAS with actuator faults described by (1), let Assumption 1 hold, and design the controller as in (10); then the tracking error of the MAS is bounded if the following inequalities are satisfied:
3ν‖I − œΦ(ι)Λ(ι)‖² + 3μ‖Φ^{−1}(ι+1)‖²‖I − œΦ(ι)Λ(ι)‖² < ν,
3ν‖j‖²‖Φ(ι)‖²‖ϰ(ι)‖² + 3μ‖ϰ(ι+1) − Φ^{−1}(ι+1)jΦ(ι)ϰ(ι)‖² < μ,  (11)
where œ, j, Λ(ι), ϰ(ι), and Φ(ι) are the gain and PPD matrices defined in (15) and (16) below.
Proof of Theorem 1.
This proof is divided into two parts, proving the boundedness of the estimation of PPD and the containment error, respectively.
Part 1: This part consists of two cases. In case 1, the reset condition (10b) is triggered, so ϱ̂_i(ι) = ϱ̂_i(1) and the boundedness of the PPD estimate is obvious.
In case 2, substituting (10a) into the estimation error e_ϱi(ι) = ϱ̂_i(ι) − ϱ_i(ι) yields:
e_ϱi(ι) = e_ϱi(ι−1) + ϱ_i(ι−1) − ϱ_i(ι) + [ϖ_iΔu_ri(ι−1)/(η_i + Δu_ri²(ι−1))][Δy_i(ι) − ϱ̂_i(ι−1)Δu_ri(ι−1)]
= [1 − ϖ_iΔu_ri(ι−1)/(η_i + Δu_ri²(ι−1))]e_ϱi(ι−1) + ϱ_i(ι−1) − ϱ_i(ι) + ϱ_i(ι−1)Δu_fi(ι−1)[ϖ_iΔu_ri(ι−1)/(η_i + Δu_ri²(ι−1))]  (12)
Then, we can obtain
|e_ϱi(ι)| ≤ |ϱ_i(ι) − ϱ_i(ι−1)| + |ϱ_i(ι−1)Δu_fi(ι−1)[ϖ_iΔu_ri(ι−1)/(η_i + Δu_ri²(ι−1))]| + |1 − ϖ_iΔu_ri(ι−1)/(η_i + Δu_ri²(ι−1))||e_ϱi(ι−1)|  (13)
Also, it is easy to obtain |ϖ_iΔu_ri(ι−1)/(η_i + Δu_ri²(ι−1))| ≤ ϖ_i|Δu_ri(ι−1)|/(2√η_i|Δu_ri(ι−1)|) = ϖ_i/(2√η_i), |ϱ_i(ι) − ϱ_i(ι−1)| ≤ 2ς_i, and |ϱ_i(ι)| ≤ ς_i. Thus, we can obtain
|e_ϱi(ι)| ≤ Γ_i|e_ϱi(ι−1)| + 2ς_i + ς_i u_fm ϖ_i/(2√η_i)
≤ Γ_i²|e_ϱi(ι−2)| + Γ_i[2ς_i + ς_i u_fm ϖ_i/(2√η_i)] + 2ς_i + ς_i u_fm ϖ_i/(2√η_i)
≤ … ≤ Γ_i^{ι−1}|e_ϱi(1)| + [2ς_i + ς_i u_fm ϖ_i/(2√η_i)]/(1 − Γ_i),  (14)
where Γ_i is a positive constant that can be obtained by selecting ϖ_i and η_i to satisfy 0 < |1 − ϖ_iΔu_ri(ι−1)/(η_i + Δu_ri²(ι−1))| ≤ Γ_i < 1.
Based on the above analysis, since both e ϱ i ( ι ) and ϱ i ( ι ) are bounded, it follows that ϱ ^ i ( ι ) is also bounded.
Part 2: In this part, we prove the boundedness of the tracking error e_Si(ι). By Lemma 1, if ê_Si(ι) = χ_i(ι) − y_i(ι) remains bounded as ι → ∞, then e_Si(ι) is also bounded.
From (10c), we have
U(ι) = U(ι−1) + œΛ(ι)e_S(ι) − jϰ(ι)ΔÛ_f(ι),  (15)
where
œ = diag{σ_1, σ_2, …, σ_N}, j = diag{η_1, η_2, …, η_N}, U(ι) = [u_1(ι), u_2(ι), …, u_N(ι)]^T,
Λ(ι) = diag{ϱ̂_1(ι)/(α_1 + ϱ̂_1²(ι)), ϱ̂_2(ι)/(α_2 + ϱ̂_2²(ι)), …, ϱ̂_N(ι)/(α_N + ϱ̂_N²(ι))},
ϰ(ι) = diag{ϱ̂_1²(ι)/(α_1 + ϱ̂_1²(ι)), ϱ̂_2²(ι)/(α_2 + ϱ̂_2²(ι)), …, ϱ̂_N²(ι)/(α_N + ϱ̂_N²(ι))},
e_S(ι) = [ê_S1(ι), ê_S2(ι), …, ê_SN(ι)]^T, ΔÛ_f(ι) = [Δû_f1(ι), Δû_f2(ι), …, Δû_fN(ι)]^T.
Substituting (15) into (4), we can obtain
y(ι+1) = y(ι) + Φ(ι)[œΛ(ι)e_S(ι) + ΔU_f(ι) − jϰ(ι)ΔÛ_f(ι)],  (16)
where ΔU_f(ι) = [Δu_f1(ι), Δu_f2(ι), …, Δu_fN(ι)]^T and Φ(ι) = diag{ϱ_1(ι), ϱ_2(ι), …, ϱ_N(ι)}.
Then, we can obtain
e_S(ι+1) = e_S(ι) − œΦ(ι)Λ(ι)e_S(ι) − Φ(ι)ΔU_f(ι) + jΦ(ι)ϰ(ι)ΔÛ_f(ι) + χ̄(ι+1) − χ̄(ι)
= (I − œΦ(ι)Λ(ι))e_S(ι) + (jΦ(ι)ϰ(ι) − Φ(ι))ΔU_f(ι) + Δχ̄(ι+1) + jΦ(ι)ϰ(ι)ΔŨ_f(ι),  (17)
where χ̄(ι) = [χ_1(ι), χ_2(ι), …, χ_N(ι)]^T and ΔŨ_f(ι) = ΔÛ_f(ι) − ΔU_f(ι).
According to (5), we have
ΔÛ_f(ι+1) = ϰ(ι+1)ΔÛ_f(ι) − Φ^{−1}(ι+1)e_S(ι+1)  (18)
Then, we can obtain
ΔŨ_f(ι+1) = ΔÛ_f(ι+1) − ΔU_f(ι+1) = ϰ(ι+1)ΔÛ_f(ι) − Φ^{−1}(ι+1)e_S(ι+1) − ΔU_f(ι+1)
= ϰ(ι+1)ΔŨ_f(ι) + ϰ(ι+1)ΔU_f(ι) − ΔU_f(ι+1) − Φ^{−1}(ι+1)e_S(ι+1)  (19)
Substituting (17) into (19), we can obtain
ΔŨ_f(ι+1) = [ϰ(ι+1) − Φ^{−1}(ι+1)jΦ(ι)ϰ(ι)]ΔŨ_f(ι) + [ϰ(ι+1) − Φ^{−1}(ι+1)(jΦ(ι)ϰ(ι) − Φ(ι))]ΔU_f(ι) − Φ^{−1}(ι+1)(I − œΦ(ι)Λ(ι))e_S(ι) − ΔU_f(ι+1) − Φ^{−1}(ι+1)Δχ̄(ι+1)  (20)
Select the Lyapunov function as L_S(ι) = νe_S^T(ι)e_S(ι) + μΔŨ_f^T(ι)ΔŨ_f(ι), where ν > 0 and μ > 0.
Then, the difference of L_S can be obtained as
ΔL_S(ι+1) = νe_S^T(ι+1)e_S(ι+1) + μΔŨ_f^T(ι+1)ΔŨ_f(ι+1) − νe_S^T(ι)e_S(ι) − μΔŨ_f^T(ι)ΔŨ_f(ι)  (21)
Substituting (17) and (20) into (21), we have
ΔL_S(ι+1) = ν[(I − œΦ(ι)Λ(ι))e_S(ι) + jΦ(ι)ϰ(ι)ΔŨ_f(ι) + (jΦ(ι)ϰ(ι) − Φ(ι))ΔU_f(ι) + Δχ̄(ι+1)]^T × [(I − œΦ(ι)Λ(ι))e_S(ι) + jΦ(ι)ϰ(ι)ΔŨ_f(ι) + (jΦ(ι)ϰ(ι) − Φ(ι))ΔU_f(ι) + Δχ̄(ι+1)] + μ{[ϰ(ι+1) − Φ^{−1}(ι+1)jΦ(ι)ϰ(ι)]ΔŨ_f(ι) + [ϰ(ι+1) − Φ^{−1}(ι+1)(jΦ(ι)ϰ(ι) − Φ(ι))]ΔU_f(ι) − Φ^{−1}(ι+1)(I − œΦ(ι)Λ(ι))e_S(ι) − ΔU_f(ι+1) − Φ^{−1}(ι+1)Δχ̄(ι+1)}^T × {[ϰ(ι+1) − Φ^{−1}(ι+1)jΦ(ι)ϰ(ι)]ΔŨ_f(ι) + [ϰ(ι+1) − Φ^{−1}(ι+1)(jΦ(ι)ϰ(ι) − Φ(ι))]ΔU_f(ι) − Φ^{−1}(ι+1)(I − œΦ(ι)Λ(ι))e_S(ι) − ΔU_f(ι+1) − Φ^{−1}(ι+1)Δχ̄(ι+1)} − νe_S^T(ι)e_S(ι) − μΔŨ_f^T(ι)ΔŨ_f(ι)  (22)
Due to the boundedness of χ̄(ι), ΔU_f(ι), Φ(ι), and Φ̂(ι), there exist positive constants M_1 and M_2 such that ‖(jΦ(ι)ϰ(ι) − Φ(ι))ΔU_f(ι) + Δχ̄(ι+1)‖ ≤ M_1, ‖[ϰ(ι+1) − Φ^{−1}(ι+1)(jΦ(ι)ϰ(ι) − Φ(ι))]ΔU_f(ι) − ΔU_f(ι+1) − Φ^{−1}(ι+1)Δχ̄(ι+1)‖ ≤ M_2, and Φ_l ≤ ‖Φ̂(ι)‖ ≤ Φ_u. Then we have
ΔL_S(ι+1) ≤ 3ν(‖I − œΦ(ι)Λ(ι)‖²‖e_S(ι)‖² + M_1² + ‖j‖²‖Φ(ι)‖²‖ϰ(ι)‖²‖ΔŨ_f(ι)‖²) + 3μ(‖ϰ(ι+1) − Φ^{−1}(ι+1)jΦ(ι)ϰ(ι)‖²‖ΔŨ_f(ι)‖² + M_2² + ‖Φ^{−1}(ι+1)‖²‖I − œΦ(ι)Λ(ι)‖²‖e_S(ι)‖²) − ν‖e_S(ι)‖² − μ‖ΔŨ_f(ι)‖²
≤ (3ν‖I − œΦ(ι)Λ(ι)‖² + 3μ‖Φ^{−1}(ι+1)‖²‖I − œΦ(ι)Λ(ι)‖² − ν)‖e_S(ι)‖² + (3ν‖j‖²‖Φ(ι)‖²‖ϰ(ι)‖² + 3μ‖ϰ(ι+1) − Φ^{−1}(ι+1)jΦ(ι)ϰ(ι)‖² − μ)‖ΔŨ_f(ι)‖² + 3νM_1² + 3μM_2²
= Π_1‖e_S(ι)‖² + Π_2‖ΔŨ_f(ι)‖² + Π_3,  (23)
where Π_1 = 3ν‖I − œΦ(ι)Λ(ι)‖² + 3μ‖Φ^{−1}(ι+1)‖²‖I − œΦ(ι)Λ(ι)‖² − ν, Π_2 = 3ν‖j‖²‖Φ(ι)‖²‖ϰ(ι)‖² + 3μ‖ϰ(ι+1) − Φ^{−1}(ι+1)jΦ(ι)ϰ(ι)‖² − μ, and Π_3 = 3νM_1² + 3μM_2².
From (11), Π_1 and Π_2 are both negative constants. Based on the Lyapunov stability theory, we can obtain ΔL_S(ι+1) ≤ 0 if at least one of the following inequalities holds: ‖e_S(ι)‖ > √(Π_3/(−Π_1)) or ‖ΔŨ_f(ι)‖ > √(Π_3/(−Π_2)). Therefore, e_S(ι) and ΔŨ_f(ι) are bounded. □
Theorem 2.
For the ESO designed as (5), if κ_i1 and κ_i2 are selected to ensure max{|(2 − κ_i1 + √(κ_i1² − 4κ_i2))/2|, |(2 − κ_i1 − √(κ_i1² − 4κ_i2))/2|} < 1, then the observer error z̃_i2 of the ESO is bounded.
Proof of Theorem 2.
From (5), we have
[z̃_i1(ι+1), z̃_i2(ι+1)]^T = M_i [z̃_i1(ι), z̃_i2(ι)]^T + [0, Δz_i2(ι+1)]^T, with M_i = [[1 − κ_i1, 1], [−κ_i2, 1]],  (24)
where z̃_i1(ι) = z_i1(ι) − ẑ_i1(ι) and z̃_i2(ι) = z_i2(ι) − ẑ_i2(ι) denote the observer errors of the ESO, and Δz_i2(ι+1) = z_i2(ι+1) − z_i2(ι).
Since max{|(2 − κ_i1 + √(κ_i1² − 4κ_i2))/2|, |(2 − κ_i1 − √(κ_i1² − 4κ_i2))/2|} < 1 is satisfied, the spectral radius γ(M_i) of M_i = [[1 − κ_i1, 1], [−κ_i2, 1]] is less than 1. Then, there exists a matrix norm ‖·‖_ψ such that ‖M_i‖_ψ ≤ γ(M_i) + ϵ ≜ c_1 < 1, where ϵ > 0 is sufficiently small.
Combined with the boundedness of ϱ i , we can obtain
‖[z̃_i1(ι+1), z̃_i2(ι+1)]^T‖_ψ ≤ c_1‖[z̃_i1(ι), z̃_i2(ι)]^T‖_ψ + c_2 ≤ c_1^{ι+1}‖[z̃_i1(0), z̃_i2(0)]^T‖_ψ + c_2/(1 − c_1),  (25)
where c_2 ≥ ‖[0, Δz_i2(ι+1)]^T‖_ψ, and the observer errors are bounded. □
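The gain condition of Theorem 2 is easy to verify numerically. The sketch below computes the eigenvalue moduli of the ESO error matrix [[1 − κ_i1, 1], [−κ_i2, 1]] and checks the gains κ_11 = 0.54, κ_12 = 0.08 used in Section 5:

```python
# Numerical check of the Theorem 2 gain condition: the eigenvalues are
# (2 - k1 +/- sqrt(k1^2 - 4*k2)) / 2 and both moduli must be < 1.
import cmath

def eso_eig_moduli(k1, k2):
    disc = cmath.sqrt(k1 * k1 - 4 * k2)   # complex sqrt handles k1^2 < 4*k2
    lam1 = (2 - k1 + disc) / 2
    lam2 = (2 - k1 - disc) / 2
    return abs(lam1), abs(lam2)

m1, m2 = eso_eig_moduli(0.54, 0.08)       # Section 5 gains
stable = max(m1, m2) < 1
```

For these gains the eigenvalues form a complex-conjugate pair with modulus √0.54 ≈ 0.735, so the ESO error dynamics contract and Theorem 2 applies.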

5. Simulation

Consider a heterogeneous MAS with three followers and one leader, where the leader’s output is time-varying and non-convergent. The communication topology is shown in Figure 1.
Figure 1. The communication topology of numerical simulation.
Define the dynamics of the leader as:
x_0(ι+1) = [[cos(π/300), −sin(π/300)], [sin(π/300), cos(π/300)]] x_0(ι), y_0(ι) = [0, 1] x_0(ι).
Define the dynamics of the followers as:
y_1(ι+1) = u_1(ι)/5 + y_1(ι)u_1²(ι)/(1 + y_1²(ι)),
y_2(ι+1) = u_2(ι)/2 + y_2(ι)u_2(ι)/(1 + y_2³(ι)),
y_3(ι+1) = u_3(ι) + sin(y_3(ι)).
Moreover, assume the actuator faults occur on follower 1 as
Δ u f 1 = 0.15 sin ( ι π / 300 ) .
The parameters of the distributed observers are selected as α_1 = α_2 = α_3 = 0.3, β_1 = β_2 = β_3 = 0.4, and σ_1 = σ_2 = σ_3 = 0.4. The parameters of the ESO are set as κ_11 = 0.54 and κ_12 = 0.08. The parameters of the adaptive controller are set as ϖ_1 = 0.9, ϖ_2 = 1, ϖ_3 = 0.1, ϑ_1 = 0.3, ϑ_2 = 0.01, ϑ_3 = 0.1, ρ_1 = 0.7, ρ_2 = 0.99, ρ_3 = 0.9, η_1 = 0.001, η_2 = 0.1, and η_3 = 1. Moreover, the initial outputs of the followers are selected as y_1 = y_2 = y_3 = 0, and the initial state of the leader is set as x_0(0) = [0.2, 0.2]^T. The initial PPDs of the followers are selected as ϱ̂_1(0) = 2, ϱ̂_2(0) = 1.7, and ϱ̂_3(0) = 5.
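As a sanity check on the simulation setup, the leader recursion above is a rotation by π/300 per step, so its output is a sinusoid of period 600 steps with amplitude equal to the norm of the initial state; a short sketch:

```python
# Iterate the Section 5 leader dynamics: a plane rotation by pi/300 per step,
# with output y0 = [0, 1] x0.
import math

th = math.pi / 300
x = [0.2, 0.2]                  # x0(0)
traj = []
for _ in range(600):
    traj.append(x[1])           # y0(t) = second component of x0(t)
    x = [math.cos(th) * x[0] - math.sin(th) * x[1],
         math.sin(th) * x[0] + math.cos(th) * x[1]]
```

After 600 steps the total rotation is exactly 2π, so the state returns to [0.2, 0.2]^T, and the recorded output peaks at √0.08 ≈ 0.283, the time-varying, non-convergent reference the followers must track.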
The performance of the proposed observers is shown in Figure 2, Figure 3 and Figure 4. As illustrated in Figure 2 and Figure 3, the followers can effectively observe the leader’s internal and output dynamics by the 50th time step. Moreover, the leader’s output can also be observed by the 200th time step. These results show that the MAS has been fully decoupled by the distributed observers, and the followers can obtain the leader’s output after the 200th step.
Figure 2. The observational error e A i 2 = A ^ i A 0 2 of the internal dynamics of the leader.
Figure 3. The observational error e C i 2 = C ^ i C 0 2 of the output dynamics of the leader.
Figure 4. The observational error ξ i = y i χ i of the output of the leader.
The tracking performances of the followers are shown in Figure 5, Figure 6 and Figure 7, respectively. As the figures show, based on the distributed observer and the ESO, the followers can track the trajectory of the leader after the 300th time step. Moreover, the performance of the ESO is shown in Figure 8, which indicates that the ESO can estimate the increment of the actuator faults after the 300th time step. These results show that the proposed algorithm can deal with the consensus control problem of MASs with unknown actuator faults.
Figure 5. The tracking performance of follower 1.
Figure 6. The tracking performance of follower 2.
Figure 7. The tracking performance of follower 3.
Figure 8. The performance of ESO.
To evaluate the decoupling capability of the proposed algorithm and its effectiveness in preventing error propagation across the system, the parameters of follower 1 were deliberately modified such that it fails to estimate the control error accurately and cannot track the leader’s trajectory. The tracking performance of all followers is illustrated in Figure 9, Figure 10 and Figure 11, while the performance of the ESO is shown in Figure 12. A comparison of Figure 10 and Figure 11 with Figure 6 and Figure 7 indicates that the tracking performances of followers 2 and 3 remain unaffected, despite the degraded performance of follower 1. These results show that the proposed algorithm enables system decoupling through the use of observers. When the ESO of one agent fails, the others remain unaffected, demonstrating the scalability and robustness of the proposed control strategy.
Figure 9. The tracking performance of follower 1 with unsuitable parameters.
Figure 10. The tracking performance of follower 2.
Figure 11. The tracking performance of follower 3.
Figure 12. The performance of ESO.
The tracking performances of the followers under the algorithm in [27] are shown in Figure 13. As Figure 13 shows, the followers cannot track the leader’s trajectory under the algorithm in [27]. As shown in Figure 14, the fault estimator designed in [27] cannot estimate the actuator faults well. These results show that under the algorithm designed in [27], due to the coupling between agents, actuator faults propagate through the system, thus affecting the stability of the whole system.
Figure 13. The tracking performances of followers under the algorithm in [27].
Figure 14. The performance of the fault estimator in [27].

6. Conclusions

In this paper, a novel observer-based, fully distributed fault-tolerant consensus control algorithm for MFAC is proposed to address the consensus control problem in nonlinear MASs. The proposed method overcomes the challenge of followers lacking direct access to the leader’s state by utilizing a distributed observer that estimates the leader’s state from local information. This approach decouples the consensus control problem into independent tracking tasks for each agent. An ESO based on a data-driven model is introduced to estimate unknown actuator faults, particularly brake faults, and an adaptive controller is designed for fault compensation. Theoretical analysis confirms the boundedness of observer and tracking errors. Finally, simulation results validate the robustness and effectiveness of the proposed algorithm in fault-tolerant control scenarios. Future work will focus on conducting physical experiments to further validate the proposed method and extending the framework to event-triggered mechanisms and partially connected network topologies.

Author Contributions

Methodology, Y.Z., Y.L. and M.Z.; Software, D.L. and J.C.; Validation, D.L., Y.L. and D.G.; Formal analysis, D.L., S.S. and M.Z.; Investigation, D.L. and Y.L.; Resources, Y.Z., D.L. and S.S.; Data curation, D.G., J.C. and S.S.; Writing—original draft, Y.Z.; Writing—review & editing, D.G.; Visualization, M.Z.; Supervision, S.S.; Funding acquisition, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62303095), and Fundamental Research Funds for the Central Universities (2682025CX080).

Data Availability Statement

Data available on request due to restrictions. The data presented in this study are available on request from the corresponding author, because the data are not publicly available due to specific confidentiality agreements.

Conflicts of Interest

The authors declare no conflict of interest.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
