Article

Iterative Learning Bipartite Consensus Control for Fractional-Order Switched Nonlinear Heterogeneous MASs with Cooperative and Antagonistic Interactions

1 School of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu 610051, China
2 Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 510640, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2026, 10(2), 98; https://doi.org/10.3390/fractalfract10020098
Submission received: 22 December 2025 / Revised: 23 January 2026 / Accepted: 27 January 2026 / Published: 2 February 2026
(This article belongs to the Special Issue Fractional Dynamics and Control in Multi-Agent Systems and Networks)

Abstract

The coordination of switched fractional-order nonlinear heterogeneous multi-agent systems (FONHMASs) with cooperative and antagonistic interactions presents significant challenges due to the complex coupling of switched fractional-order dynamics. Crucially, existing control methods typically rely on integer-order assumptions and precise system modeling, which are inadequate for capturing the inherent non-local memory behaviors of fractional dynamics. Furthermore, they generally assume fixed agent dynamics and cannot be applied to switched FONHMASs, where the continuity of the agents' dynamics is violated at switching instants. Considering the practical constraints of imprecise modeling and limited task time for switched FONHMASs, a distributed $D^{\alpha}$-type iterative learning control (ILC) protocol is proposed to achieve bipartite consensus in the presence of cooperative and antagonistic interactions. In addition, without relying on repetitive initial conditions, convergence of the bipartite consensus error along the iteration axis is established based on the proposed initial state learning mechanism and the $D^{\alpha}$-type ILC protocol. Furthermore, in consideration of external disturbances, the robustness of the iterative bipartite consensus controller for the switched FONHMASs is analyzed. Simulation results confirm that the switched FONHMASs achieve convergence and robustness of the bipartite consensus errors along the iteration direction. In addition, the proposed $D^{\alpha}$-type ILC protocol achieves a maximum root-mean-square error (MRMSE) of 0.0168 in the time domain, significantly outperforming the integer-order ILC (MRMSE = 0.3601) and fractional-order PID control (MRMSE = 0.7550), confirming its superiority.

1. Introduction

Over the past few decades, the coordinated control of multi-agent systems (MASs) has garnered considerable research interest from both the academic and industrial communities, e.g., aerial robotics [1,2,3], smart grids [4], underwater robotics [5] and intelligent transportation systems [6]. Among the various cooperative control problems, consensus, which aims to ensure that all agents in an undirected/directed communication network converge to a common state/output through local information exchange, is regarded as the fundamental theoretical cornerstone [7,8]. While centralized control or broadcast schemes can simplify the coordination problem by assuming that a global reference signal is directly available to all agents [9], such approaches rely on the restrictive assumption of global connectivity [10,11]. This imposes a heavy communication burden on the leader and renders the system vulnerable to single-point failures, particularly in large-scale or geographically dispersed applications. In contrast, distributed control strategies, which rely solely on local neighbor-to-neighbor interactions, offer superior scalability and robustness. By requiring only a communication spanning tree rather than global accessibility, distributed strategies are more practical for FONHMASs where communication resources are limited and the topology is time-varying. However, while early research predominantly focused on agents governed by integer-order differential equations, advances in materials science and complex network theory have revealed that many physical systems—such as viscoelastic materials [12], dielectric polarization [13], and electromagnetic waves [14]—exhibit inherent non-local, memory-dependent, and hereditary properties [15]. Standard integer-order differential equations are often inadequate for accurately describing such dynamics [12,13,14]. Consequently, fractional-order calculus, which generalizes differentiation and integration to arbitrary orders, has emerged as a powerful mathematical tool [16]. Studies have demonstrated that incorporating fractional dynamics into MASs—yielding fractional-order MASs (FOMASs)—not only improves modeling accuracy but also presents new challenges for stability analysis, as classical Lyapunov theory for integer-order systems is not directly applicable [17].
In recent years, various control strategies have been proposed for the coordination of FOMASs. Regarding observer-based control, Yu et al. [18] designed an observer to achieve tracking consensus for second-order multi-agent systems with fractional orders less than two. Addressing the dynamic differences among agents, Yang et al. [19] investigated the containment control problem for heterogeneous FOMASs. Furthermore, to ensure high-precision tracking performance while reducing communication resource consumption, Wang et al. [20] presented an event-triggered controller to achieve perfect consensus tracking for non-identical FOMASs. Recently, Yan et al. [21] further extended this field by proposing an observer-based boundary control strategy to solve the consensus problem of FOMASs. Although significant progress has been made in the consensus of FOMASs, most existing results rely on the restrictive assumption of cooperative interactions [18,19,20,21], where all edge weights in the communication graph are non-negative. However, in many real-world social, biological, and economic networks, interactions are often antagonistic, embodying complex relationships such as trust versus distrust, symbiosis versus competition, or collaboration versus rivalry [22]. To model such complex interactions mathematically, a communication network with positive or negative edge weights was introduced for the FOMASs. In such a cooperative-antagonistic network, the consensus objective evolves into bipartite consensus, where agents are classified into two disjoint clusters: (1) agents within the same cluster converge to the same state/output, and (2) agents in different clusters converge to states/outputs with the same magnitude but opposite signs [23]. In addition, the realization of bipartite consensus strictly relies on the structural balance of the cooperative-antagonistic communication network [24]. Up to now, bipartite consensus under cooperative and antagonistic networks has been widely investigated for integer-order MASs. For instance, a prescribed-time control scheme has emerged, employing time-varying scaling functions to ensure bipartite consensus convergence within a user-defined duration independent of initial conditions [25]. Event-triggered mechanisms have been integrated to reduce communication frequency while maintaining system stability for the bipartite consensus of MASs [26]. Considering the prevalence of cyber threats in cooperative and antagonistic networks, resilient control strategies against Denial-of-Service (DoS) attacks have been developed [27]. Despite substantial progress made in the bipartite consensus of MASs, achieving bipartite consensus for more complex FOMASs under cooperative and antagonistic networks remains a challenging frontier.
On the other hand, MASs are often confined to single-task scenarios. To address multi-task scenarios, the concept of switched MASs has been introduced in recent years [28,29,30], where agents can dynamically adjust their structures according to different tasks. Switched MASs, characterized as hybrid systems integrating continuous dynamics with discrete switching signals, possess the distinct capability to model agent transitions across varying dynamic modes. In contrast to traditional non-switched systems, switched MASs enable agents to flexibly adjust their dynamics in response to task phases, such as payload variations [31] or mode switching [32], thereby demonstrating superior adaptability and flexibility. Thus, switched MASs have garnered significant research attention as they effectively handle practical multi-task scenarios. For example, Xue et al. [33] investigated the practical output synchronization for asynchronously switched MASs. They introduced a piecewise average dwell time (ADT) method to handle non-attenuating state impulses and developed a regulation strategy to adapt to fast-switching perturbations. However, this approach is primarily designed for linear dynamics and relies on the assumption that perturbations can be regulated back to slow switching. For nonlinear dynamics, Zou et al. [34] proposed a novel adaptive protocol for second-order switched nonlinear MASs using neural networks (NNs). Their method achieves practical finite-time consensus for heterogeneous agents without requiring strict Lipschitz conditions. In [28], Li et al. focused on non-strict feedback switched MASs subject to input saturations. By utilizing Gaussian error functions and constructing a common Lyapunov function, adaptive consensus under arbitrary switching mechanisms was achieved. Furthermore, to optimize communication resources in dynamic environments, a dual-switch-based dynamic event-triggered mechanism was developed in [29]. This approach handles both model switching and topological switching simultaneously while accommodating non-zero leader inputs. However, the design is largely tailored for linear system models and assumes bounded leader inputs. Despite these significant contributions for switched MASs, the existing literature is predominantly restricted to integer-order dynamics; the cooperative control of switched FOMASs remains largely unexplored. The analytical tools developed for integer-order switched MASs, particularly those for Lyapunov stability analysis, cannot be directly applied to fractional-order domains due to the non-local nature of fractional operators [20]. Consequently, developing robust control protocols for switched FOMASs that simultaneously accommodate switching behaviors and fractional dynamics represents a critical open problem and a necessary evolution in the cooperative control theory of MASs.
A primary challenge in designing control protocols for FOMASs [18,19,20,21] lies in the inherent difficulty of obtaining an accurate dynamic model. This difficulty arises from the non-local characteristics of fractional-order operators, coupled with unmodeled dynamics, parametric uncertainties, and external disturbances, which collectively hinder precise system characterization and complicate the synthesis of robust control strategies. To address this restriction, an iterative learning control (ILC) technique is introduced for the bipartite consensus of switched FOMASs. It is also worth distinguishing the proposed ILC framework from other prevalent model-free control and optimization schemes. For instance, the safe experimentation dynamics (SED) algorithm has proven effective for data-driven control of MIMO systems by maintaining stability during experimentation [35]. Distributed stochastic gradient descent (SGD) algorithms have been extensively studied for their convergence properties in distributed coordination networks [36]. Furthermore, perturbation-based methods such as norm-limited SPSA have been applied to multi-robot systems [37], and smoothed functional algorithms (SFA) have been utilized for simulation optimization problems [38]. While these methods are powerful for parameter tuning and policy search, they typically treat the system as a black box and rely on stochastic estimation or gradient approximations. In contrast, ILC is specifically tailored for systems performing repetitive tasks. By explicitly exploiting the temporal error information from previous iterations, ILC can achieve perfect tracking for fractional-order dynamics more efficiently than general stochastic search methods. Moreover, for switched multi-agent systems, the distributed ILC protocol offers a direct mechanism to compensate for non-local fractional dynamics through local interactions, avoiding the complex gradient estimation often required by distributed variants of SPSA or SGD.
It is noted that ILC is a powerful strategy particularly suited for systems that perform repetitive tasks, relying on its ability to improve tracking performance from trial to trial without the need for precise system models [39]. This model-independent nature allows ILC to achieve perfect tracking over finite time intervals, making it a valuable tool for complex FOMASs with unknown dynamics. For FOMASs, recent research has expanded ILC applications, such as open-closed-loop $D^{\alpha}$-type ILC [40], $PD^{\alpha}$-type ILC [41], open-loop $D^{\alpha}$-type ILC [42], PI-type ILC [43], and event-triggered ILC [20]. The aforementioned ILC-based methods for FOMASs are summarized in Table 1. From Table 1, it is noted that these studies assume that the agents' dynamics remain non-switched, and they cannot be directly applied to solving the cooperative control problems of switched FOMASs. This is because the continuity of the dynamics is violated at the switching points [30], such as the Lipschitz condition of the nonlinear functions. Therefore, the ILC approach for switched FOMASs still requires further exploration.
To summarize, despite the extensive literature on switched MAS coordination, significant gaps remain in addressing the challenges associated with switched FOMASs. Furthermore, agents in such systems often exhibit nonlinear and heterogeneous characteristics to accommodate complex task scenarios, and these systems are referred to as switched fractional-order nonlinear heterogeneous MASs (FONHMASs). Motivated by these observations, this paper investigates the bipartite consensus problem for FONHMASs over cooperative and antagonistic communication networks. The principal contributions of this work are as follows:
  • In contrast to the conventional FOMASs discussed in [18,19,20], the agents in the switched FONHMASs with cooperative and antagonistic interactions exhibit switching dynamic behaviors. Moreover, the global Lipschitz condition for the nonlinear agent dynamics is relaxed, requiring satisfaction only within each switching sub-interval rather than over the entire task cycle.
  • A novel fractional-order distributed ILC protocol is proposed for the bipartite consensus of the switched FONHMASs, with a rigorous proof guaranteeing the asymptotic convergence of the bipartite consensus tracking errors along the iteration axis. Unlike conventional ILC approaches for FOMASs [20,40,41], the proposed ILC strategy eliminates the strict requirement for identical iterative initial conditions by employing a novel initial state learning law.
  • The theoretical framework is extended to analyze the robustness of the proposed scheme against bounded external disturbances. Rigorous analysis demonstrates that the bipartite consensus error remains uniformly bounded as the iteration number increases, even in the presence of non-repetitive iterative initial states and external disturbances. Also, extensive simulation results demonstrate the effectiveness and robustness of the proposed ILC method for switched FONHMASs.
The structure of this paper is as follows: Section 2 presents the system formulation of the FONHMASs, and Section 3 gives the main results of the proposed ILC method. Simulations are conducted in Section 4, and Section 5 concludes the paper.

2. Preliminaries and Problem Formulation

This paper utilizes algebraic graph theory to represent communication networks of MASs with cooperative and antagonistic interactions. Additionally, it introduces fractional-order calculus and formulates the dynamics of switched FONHMASs.

2.1. Preliminaries

Algebraic graph theory serves as the framework for modeling inter-agent data exchange. Here, the network architecture is conceptualized as a directed graph, denoted by $\mathcal{G} = (\Phi, \mathcal{E}, \mathcal{A})$, where $\Phi = \{\phi_1, \ldots, \phi_N\}$ constitutes the set of agents and $\mathcal{E} \subseteq \Phi \times \Phi$ represents the set of edges. An ordered pair $(\phi_j, \phi_i) \in \mathcal{E}$ denotes a directed information flow from agent $j$ to agent $i$. The connectivity of the network is characterized by the adjacency matrix $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$. Specifically, $a_{ij} \neq 0$ implies that agent $i$ receives information from agent $j$, i.e., $(\phi_j, \phi_i) \in \mathcal{E}$; otherwise, $a_{ij} = 0$. We assume that the communication graph has no self-loops, i.e., $a_{ii} = 0$.
To incorporate a leader agent, indexed as 0, we define an extended graph $\bar{\mathcal{G}} = \{\Phi \cup \{0\}, \bar{\mathcal{E}}, \bar{\mathcal{A}}\}$. We designate $h_j$ as a weight factor characterizing the connectivity between the leader and the $j$-th follower: $h_j = 1$ if agent $j$ is connected to the leader, and $h_j = 0$ if there is no direct link. Consequently, the leader adjacency matrix is defined as $H = \mathrm{diag}(h_1, \ldots, h_N) \in \mathbb{R}^{N \times N}$. Let $N_j$ be the neighbor set of agent $j$. Finally, the Laplacian matrix of $\mathcal{G}$ is formulated as $L = [l_{ij}] \in \mathbb{R}^{N \times N}$, where $l_{ii} = \sum_{l \in N_i} |a_{il}|$ and $l_{ij} = -a_{ij}$ ($i \neq j$).
The bipartite consensus problem involves both cooperative and antagonistic interactions in the switched FONHMASs, and a signed graph $\mathcal{G}$ is employed to characterize such interactions. The graph $\mathcal{G}$ is structurally balanced if the node set $\Phi$ can be segregated into two mutually exclusive groups, $\Phi_1$ and $\Phi_2$, satisfying $\Phi_1 \cup \Phi_2 = \Phi$ and $\Phi_1 \cap \Phi_2 = \emptyset$. Under this partition, the coupling weights satisfy $a_{ij} \ge 0$ for agents within the same subgroup, i.e., $\phi_i, \phi_j \in \Phi_1$ or $\phi_i, \phi_j \in \Phi_2$, and $a_{ij} \le 0$ for agents in different subgroups, i.e., $\phi_i \in \Phi_1, \phi_j \in \Phi_2$ or $\phi_i \in \Phi_2, \phi_j \in \Phi_1$. To facilitate the analysis, we introduce a signature variable $\delta(j)$ associated with each agent $\phi_j$, defined as $\delta(j) = 1$ if $\phi_j \in \Phi_1$, and $\delta(j) = -1$ if $\phi_j \in \Phi_2$.
In addition, the communication graph of the MASs is treated as a time-dependent graph in this paper. Specifically, the communication topology switches within a finite set of possible configurations, denoted by S = { G ¯ 1 , G ¯ 2 , , G ¯ M } . We impose the constraint that every candidate topology G ¯ p S is structurally balanced.
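To make the structural-balance requirement concrete, the short sketch below builds the signed Laplacian $L$ defined above and attempts the two-group partition by propagating the signature $\delta(j)$ along edges. The adjacency matrix and weights are illustrative placeholders, not those of the simulation examples in Section 4.

```python
import numpy as np

def signed_laplacian(A):
    """L = D - A, where D_ii = sum_l |a_il| (degrees use absolute weights)."""
    return np.diag(np.abs(A).sum(axis=1)) - A

def is_structurally_balanced(A):
    """Try to assign delta(j) in {+1, -1} so that positive edges stay within a
    group and negative edges cross groups (depth-first labeling)."""
    N = A.shape[0]
    delta = np.zeros(N, dtype=int)          # 0 means "not yet labeled"
    for start in range(N):
        if delta[start] != 0:
            continue
        delta[start] = 1
        stack = [start]
        while stack:
            i = stack.pop()
            for j in range(N):
                if A[i, j] == 0:
                    continue
                expected = delta[i] * int(np.sign(A[i, j]))
                if delta[j] == 0:
                    delta[j] = expected
                    stack.append(j)
                elif delta[j] != expected:
                    return False, None       # a cycle with an odd number of negative edges
    return True, delta

# Hypothetical 4-agent signed graph: one antagonistic edge between agents 1 and 2.
A = np.array([[0,  1,  0, 0],
              [1,  0, -1, 0],
              [0, -1,  0, 1],
              [0,  0,  1, 0]], dtype=float)
balanced, delta = is_structurally_balanced(A)
print("structurally balanced:", balanced, "signature:", delta)
print("signed Laplacian:\n", signed_laplacian(A))
```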
Assumption 1. 
It is assumed that each graph G ¯ p contains a directed spanning tree rooted at the leader (i.e., agent 0).
This assumption ensures that the leader acts as a global information source with no incoming edges, maintaining a directed path to every follower node.

2.2. Fractional-Order Calculus

This part introduces the fundamental concepts of fractional calculus, including definitions of integrals and derivatives, followed by key lemmas related to the solvability of fractional nonlinear equations.
Definition 1 
([44]). Let $f : [t_0, T] \to \mathbb{R}$ be a continuous function. The Riemann-Liouville fractional integral of order $\alpha > 0$ for $f$ is given by:
$$ I_{t_0}^{\alpha} f(t) \triangleq \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} f(\tau)\, \mathrm{d}\tau, \quad t > t_0. \tag{1} $$
In (1), $\Gamma(\cdot)$ is the Gamma function, i.e., $\Gamma(\alpha) = \int_{0}^{\infty} e^{-s} s^{\alpha-1}\, \mathrm{d}s$.
Definition 2 
([44]). For a positive order $\alpha$ satisfying $n-1 < \alpha < n$ ($n \in \mathbb{Z}_+$), the Caputo fractional derivative is expressed as
$$ {}_{t_0}^{C}D_{t}^{\alpha} f(t) \triangleq I_{t_0}^{\,n-\alpha} f^{(n)}(t) = \frac{1}{\Gamma(n-\alpha)} \int_{t_0}^{t} (t-\tau)^{n-\alpha-1} f^{(n)}(\tau)\, \mathrm{d}\tau, \tag{2} $$
with $n = \lceil \alpha \rceil$ being the least integer upper bound of $\alpha$, and $f^{(n)}(t)$ denoting the standard $n$-th order derivative $\frac{\mathrm{d}^{n} f(t)}{\mathrm{d}t^{n}}$. For the sake of brevity, the notation ${}_{t_0}^{C}D_{t}^{\alpha}$ will be abbreviated as $D_{t}^{\alpha}$ hereinafter.
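As a purely numerical illustration of Definition 2 (not part of the original derivation), the sketch below approximates the Caputo derivative of order $0 < \alpha < 1$ with a Grünwald-Letnikov sum on a uniform grid; for $0 < \alpha < 1$, the Grünwald-Letnikov derivative of $f(t) - f(t_0)$ agrees with the Caputo derivative of $f$ for sufficiently smooth $f$. The test function and step size are arbitrary choices.

```python
import numpy as np

def caputo_gl(f_samples, alpha, h):
    """Grunwald-Letnikov approximation of the Caputo derivative (0 < alpha < 1)
    of the sampled function f(t0), f(t0+h), ..., at every grid point."""
    K = len(f_samples)
    c = np.ones(K)                      # c_k = (-1)^k * binom(alpha, k), built recursively
    for k in range(1, K):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    g = f_samples - f_samples[0]        # subtracting f(t0) gives the Caputo form
    out = np.array([np.dot(c[:n + 1], g[n::-1]) for n in range(K)])
    return out / h**alpha

# Example: D^0.5 of f(t) = t on [0, 1]; the exact value is 2*sqrt(t/pi).
h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
approx = caputo_gl(t, 0.5, h)
exact = 2.0 * np.sqrt(t / np.pi)
print("max abs error:", np.max(np.abs(approx - exact)))
```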
Lemma 1 
([45]). Let $f(x, t)$ be continuous with respect to $t \in [t_0, T]$ and satisfy a Lipschitz condition with respect to $x$. Given $0 < \alpha < 1$, the fractional initial value problem
$$ D_{t}^{\alpha} x(t) = f\big(x(t), t\big), \quad x(t_0) = x_0, \tag{3} $$
can be equivalently transformed into the Volterra nonlinear integral equation
$$ x(t) = x_0 + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} f\big(x(\tau), \tau\big)\, \mathrm{d}\tau, \quad t \in [t_0, T]. \tag{4} $$

2.3. Switched FONHMASs

To cope with complex and volatile practical conditions, the switching dynamic mechanism is introduced into the FONHMASs; the resulting systems are termed switched FONHMASs. Prominent examples include multi-modal robotic systems, where the dynamic equations vary drastically between aerial and ground configurations [31], and multi-modal industrial processes, such as chemical reactions exhibiting distinct characteristics across different stages [46]. By orchestrating different subsystems via a switching signal, this framework precisely captures these abrupt, cross-modal transitions and discontinuities, thereby transcending the limitations of single-mode models. According to the dynamics of nonlinear FOMASs in [47] and considering a class of FONHMASs operating repeatedly over a finite time interval, the dynamics of the $j$-th follower ($j \in \{1, \ldots, N\}$) is described as a switched fractional-order state-space model:
$$ \begin{cases} D_{t}^{\alpha} x^{(j)}(t,\sigma) = f_{s^{(j)}(t)}\big(x^{(j)}(t,\sigma), t\big) + B_{s^{(j)}(t)}\, u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = C_{s^{(j)}(t)}\, x^{(j)}(t,\sigma), \end{cases} \tag{5} $$
where $t \in [0, T]$ denotes the continuous time, and $\sigma \in \mathbb{Z}_+$ represents the iteration index. The term $D_{t}^{\alpha}(\cdot)$ is the Caputo derivative with $\alpha \in (0,1)$. $x^{(j)}(t,\sigma) \in \mathbb{R}^{n}$, $u^{(j)}(t,\sigma) \in \mathbb{R}^{q}$, and $y^{(j)}(t,\sigma) \in \mathbb{R}^{m}$ represent the state, control input, and system output vectors, respectively. The matrices $B_{s^{(j)}(t)} \in \mathbb{R}^{n \times q}$ and $C_{s^{(j)}(t)} \in \mathbb{R}^{m \times n}$ are determined by the switching signal $s^{(j)}(t)$, and $f_{s^{(j)}(t)}(\cdot) \in \mathbb{R}^{n}$ denotes an unknown nonlinear function associated with the active mode. The switching law $s^{(j)}(t) \in \mathcal{M} = \{1, 2, \ldots, M\}$ is defined piecewise as $s^{(j)}(t) = p$ for $t \in [t_{p-1}, t_p)$, $p = 1, \ldots, M$, where $0 = t_0 < t_1 < \cdots < t_M = T$ is the switching sequence. The leader agent provides a reference output trajectory $y_d(t) \in \mathbb{R}^{m}$ for the $j$-th follower. Associated with this trajectory are the desired state $x_d(t) \in \mathbb{R}^{n}$ and the control input $u_d(t) \in \mathbb{R}^{q}$. The schematic diagram of the switched FONHMASs (5) is shown in Figure 1.
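For intuition about how the switched fractional dynamics (5) can be evaluated numerically, the sketch below discretizes the Volterra form of Lemma 1 with a rectangle rule (a fractional forward-Euler scheme). The two modes, matrices, and switching instant are hypothetical placeholders; for simplicity the fractional memory is kept from $t = 0$, whereas the convergence analysis later works interval by interval.

```python
import numpy as np
from math import gamma

alpha, h, T = 0.5, 0.01, 2.0
K = int(T / h)

# Hypothetical two-mode single-agent dynamics (echoing the structure of Section 4).
modes = {
    1: dict(f=lambda x, t: np.array([[-0.2, 0.0], [0.25, -0.1]]) @ np.sin(x),
            B=np.array([0.1, 0.3]), C=np.array([0.2, 1.0])),
    2: dict(f=lambda x, t: np.array([[-0.4, 0.0], [0.15, -0.1]]) @ np.cos(x),
            B=np.array([0.1, 0.4]), C=np.array([0.3, 1.0])),
}
switch = lambda t: 1 if t < 1.0 else 2        # s(t): mode 1 on [0, 1), mode 2 on [1, 2]

def simulate(u, x0):
    """u: length-K array of scalar inputs; x0: initial state; returns (x, y)."""
    x = np.zeros((K + 1, 2))
    x[0] = x0
    rhs = np.zeros((K, 2))                    # stored values of f(x, t) + B u
    for n in range(K):
        t_n = n * h
        m = modes[switch(t_n)]
        rhs[n] = m["f"](x[n], t_n) + m["B"] * u[n]
        # rectangle-rule weights for the kernel (t - tau)^(alpha - 1) / Gamma(alpha)
        j = np.arange(n + 1)
        w = ((n + 1 - j) ** alpha - (n - j) ** alpha) * h**alpha / gamma(alpha + 1)
        x[n + 1] = x0 + w @ rhs[: n + 1]
    y = np.array([modes[switch(n * h)]["C"] @ x[n] for n in range(K + 1)])
    return x, y

x, y = simulate(u=np.zeros(K), x0=np.array([0.2, 0.1]))
print("final state:", x[-1], "final output:", y[-1])
```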
In the following, we outline the mathematical preliminaries, comprising key definitions and assumptions, which are instrumental to the main results.
Assumption 2. 
For any switching interval $[t_{p-1}, t_p)$, the nonlinear function $f_p^{(j)}(x, t)$ associated with agent $j$ and mode $p \in \{1, \ldots, M\}$ satisfies a Lipschitz condition; that is, there exists a positive scalar $l_p^{(j)}$ such that
$$ \big\| f_p^{(j)}(\tilde{x}, t) - f_p^{(j)}(\hat{x}, t) \big\| \le l_p^{(j)}\, \| \tilde{x} - \hat{x} \|, \tag{6} $$
for all $\tilde{x}, \hat{x} \in \mathbb{R}^{n}$ and $t \in [t_{p-1}, t_p)$.
Definition 3. 
For a vector-valued continuous function $\mu : [0, T] \to \mathbb{R}^{n}$, the $\lambda$-norm is
$$ \| \mu(\cdot) \|_{\lambda} \triangleq \sup_{t \in [0,T]} e^{-\lambda t}\, \| \mu(t) \|, \quad \lambda > 0. \tag{7} $$
When $t \in [t_{\alpha}, t_{\beta}] \subseteq [0,T]$, the $\lambda$-norm of $\mu(t)$ is represented as
$$ \| \mu(\cdot) \|_{\lambda}\big|_{[t_{\alpha}, t_{\beta}]} \triangleq \sup_{t \in [t_{\alpha}, t_{\beta}]} e^{-\lambda t}\, \| \mu(t) \|, \quad \lambda > 0. \tag{8} $$
Lemma 2 
([48]). Suppose that $x(t)$, $c(t)$, and $a(t)$ are real-valued continuous functions over the interval $[0, T]$, subject to the constraint $a(t) \ge 0$. If
$$ x(t) \le c(t) + \int_{0}^{t} a(\tau)\, x(\tau)\, \mathrm{d}\tau, \quad t \in [0, T], \tag{9} $$
then
$$ x(t) \le c(t) + \int_{0}^{t} a(\tau)\, c(\tau)\, e^{\int_{\tau}^{t} a(s)\, \mathrm{d}s}\, \mathrm{d}\tau. \tag{10} $$
Furthermore, if $c(t)$ is a non-decreasing function, the inequality (10) simplifies to
$$ x(t) \le c(t)\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}, \quad t \in [0, T]. \tag{11} $$
Lemma 3 
([49]). Consider two non-negative real sequences $\{a_{\sigma}\}$ and $\{b_{\sigma}\}$. If the inequality
$$ a_{\sigma+1} \le \rho\, a_{\sigma} + b_{\sigma}, \quad \sigma \in \mathbb{Z}_+, \tag{12} $$
holds with $0 \le \rho < 1$, then $\limsup_{\sigma \to \infty} a_{\sigma} \le \frac{b}{1-\rho}$, where $\limsup_{\sigma \to +\infty} b_{\sigma} \le b$. In addition, if $\lim_{\sigma \to \infty} b_{\sigma} = 0$, then $\lim_{\sigma \to \infty} a_{\sigma} = 0$.
Definition 4. 
Bipartite consensus tracking is deemed accomplished provided that, for all $t \in [0, T]$, the output trajectories fulfill the conditions:
$$ \begin{cases} \lim_{\sigma \to \infty} \big\| y^{(i)}(t,\sigma) - y^{(j)}(t,\sigma) \big\| = 0, & i, j \in \Phi_1 \ \text{or} \ i, j \in \Phi_2, \\ \lim_{\sigma \to \infty} \big\| y^{(i)}(t,\sigma) + y^{(j)}(t,\sigma) \big\| = 0, & i \in \Phi_1, j \in \Phi_2 \ \text{or} \ j \in \Phi_1, i \in \Phi_2. \end{cases} \tag{13} $$
Equivalently, this objective can be unified as:
$$ \lim_{\sigma \to \infty} \big\| \delta(i)\, y^{(i)}(t,\sigma) - \delta(j)\, y^{(j)}(t,\sigma) \big\| = 0, \tag{14} $$
where Φ 1 Φ 2 = Φ represents the structural bipartition of the graph.
Control objective: Based on the mathematical preliminaries and system descriptions established above, the core control problem addressed in this paper can be formally stated. The bipartite consensus control objective is to design a distributed ILC protocol u ( j ) ( t , σ ) for the switched FONHMASs (5) subject to cooperative and antagonistic interactions. Specifically, the control goal is to ensure that the system achieves bipartite consensus tracking as defined in Definition 4, i.e.,
$$ \lim_{\sigma \to \infty} \big( \delta(j)\, y^{(j)}(t,\sigma) - y_d(t) \big) = 0, \quad \forall t \in [0, T], $$
where y d ( t ) is the trajectory of the leader agent.

3. Main Results

This section develops a control strategy that ensures the bipartite consensus described in Definition 4. A specific initial state learning mechanism is incorporated to address the iteration-varying initial state errors. Finally, a theoretical analysis is provided to verify the convergence and robustness of the proposed method.
To facilitate the control design, the bipartite consensus error for the j-th agent is defined as
$$ \xi^{(j)}(t,\sigma) = \sum_{l \in N_j} |a_{jl}| \big( \mathrm{sgn}(a_{jl})\, y^{(l)}(t,\sigma) - y^{(j)}(t,\sigma) \big) + h_j \big( \delta_j\, y_d(t) - y^{(j)}(t,\sigma) \big), \tag{15} $$
where $a_{jl}$ is the $(j,l)$-th element of $\mathcal{A}$, and $h_j \ge 0$ is the $j$-th diagonal element of $H$. The tracking errors involving cooperative and antagonistic agents are given by
$$ e^{(j)}(t,\sigma) = \delta_j\, y_d(t) - y^{(j)}(t,\sigma), \tag{16} $$
where δ j { 1 , 1 } denotes the bipartition sign variable.
By substituting (16) into (15) and utilizing the structural property | a j l | δ j = a j l δ l of structurally balanced signed graphs, the consensus error can be reformulated in terms of the tracking errors:
$$ \begin{aligned} \xi^{(j)}(t,\sigma) &= \sum_{l \in N_j} \big( a_{jl}\, y^{(l)}(t,\sigma) - |a_{jl}|\,\delta_j\, y_d(t) + |a_{jl}|\,\delta_j\, y_d(t) - |a_{jl}|\, y^{(j)}(t,\sigma) \big) + h_j\, e^{(j)}(t,\sigma) \\ &= \sum_{l \in N_j} \big( a_{jl}\, y^{(l)}(t,\sigma) - a_{jl}\,\delta_l\, y_d(t) + |a_{jl}|\, e^{(j)}(t,\sigma) \big) + h_j\, e^{(j)}(t,\sigma) \\ &= \sum_{l \in N_j} \big( |a_{jl}|\, e^{(j)}(t,\sigma) - a_{jl}\, e^{(l)}(t,\sigma) \big) + h_j\, e^{(j)}(t,\sigma). \end{aligned} \tag{17} $$
For the purpose of global analysis, let us define the compact state, input, and error vectors as $x(t,\sigma) = [x^{(1)T}, \ldots, x^{(N)T}]^T \in \mathbb{R}^{Nn}$, $u(t,\sigma) = [u^{(1)T}, \ldots, u^{(N)T}]^T \in \mathbb{R}^{Nq}$, $e(t,\sigma) = [e^{(1)T}, \ldots, e^{(N)T}]^T \in \mathbb{R}^{Nm}$, and $\xi(t,\sigma) = [\xi^{(1)T}, \ldots, \xi^{(N)T}]^T \in \mathbb{R}^{Nm}$. Consequently, the bipartite consensus error dynamics can be written in a compact form as
$$ \xi(t,\sigma) = \big( (L_{s(t)} + H_{s(t)}) \otimes I_m \big)\, e(t,\sigma), \tag{18} $$
where $\otimes$ denotes the Kronecker product. The compact form of (16) is
$$ e(t,\sigma) = \Delta\, y_d(t) - y(t,\sigma), \tag{19} $$
where Δ = diag ( δ 1 , δ 2 , , δ N ) R N × N and y ( t , σ ) = [ y ( 1 ) T , , y ( N ) T ] T R N m .
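To illustrate how (18) and (19) are evaluated in practice, the following sketch assembles the stacked bipartite consensus error from local tracking errors for a small hypothetical topology; the adjacency weights, pinning gains, signatures, and outputs below are illustrative placeholders, not the simulation setup of Section 4.

```python
import numpy as np

A = np.array([[ 0, 1, -1],
              [ 1, 0,  0],
              [-1, 0,  0]], dtype=float)       # signed adjacency (assumed)
H = np.diag([1.0, 0.0, 0.0])                   # leader pinning gains (assumed)
delta = np.array([1, 1, -1])                   # signatures from the bipartition
L = np.diag(np.abs(A).sum(axis=1)) - A         # signed Laplacian

m = 1                                          # output dimension
y = np.array([0.3, 0.1, -0.4])                 # current outputs y^(j)(t, sigma)
y_d = 0.5                                      # leader output y_d(t)

e = delta * y_d - y                            # local tracking errors, Eq. (16)
xi = np.kron(L + H, np.eye(m)) @ e             # stacked consensus error, Eq. (18)
print("e  =", e)
print("xi =", xi)
```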
Remark 1. 
It is worth noting that a broadcast control scheme could simplify the synchronization problem by transmitting the reference y d ( t ) to all agents directly. However, such a scheme relies on the strong assumption that the leader has global connectivity to every agent. In contrast, the distributed ILC protocol proposed in this paper only requires a directed spanning tree (Assumption 1), making it suitable for large-scale FONHMASs where global broadcast is infeasible due to limited communication range or bandwidth. Furthermore, our protocol explicitly leverages the local cooperative and antagonistic couplings to achieve bipartite consensus, whereas a broadcast scheme would treat agents as isolated subsystems.

3.1. Initial State Learning Mechanism

Furthermore, to relax the strict identical initial condition assumption, the following initial state learning mechanism is employed [50]:
$$ x^{(j)}(0, \sigma+1) = x^{(j)}(0,\sigma) + B_{s^{(j)}(0)}\, \varsigma^{(j)}\, \xi^{(j)}(0,\sigma), \tag{20} $$
where ς ( j ) R q × m is the learning gain matrix to be designed.
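A minimal sketch of the update (20) for a single agent is given below; the dimensions and numerical values of $B_1^{(j)}$, $\varsigma^{(j)}$, and the errors are placeholders chosen only to show the computation.

```python
import numpy as np

def initial_state_learning(x0_prev, xi0_prev, B0, varsigma):
    """Initial-state learning law (20): x(0, sigma+1) = x(0, sigma) + B_{s(0)} varsigma xi(0, sigma)."""
    return x0_prev + B0 @ (varsigma @ xi0_prev)

x0 = np.array([0.2, 0.1])        # x^(j)(0, sigma), n = 2
xi0 = np.array([-0.3])           # xi^(j)(0, sigma), m = 1
B0 = np.array([[0.1], [0.3]])    # B_1^(j), n x q
varsigma = np.array([[0.5]])     # learning gain, q x m
print("x^(j)(0, sigma+1) =", initial_state_learning(x0, xi0, B0, varsigma))
```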
Remark 2. 
It is worth noting that the initial state learning mechanism (20) requires the adjustability of the agents' initial states, and this may be challenging in practical applications where full state accessibility is not guaranteed. However, for certain electromechanical systems (e.g., the multi-motor systems in Example IV), key state variables such as position and velocity can often be reset or calibrated. For systems with unmeasurable internal states, future research may consider combining this scheme with state observers or input-compensation strategies to relax this constraint.
Theorem 1. 
Under Assumptions 1 and 2, consider the switched FONHMASs (5) with an arbitrary initial state $x^{(j)}(0, \sigma)$. If there exists a gain matrix $\Xi \in \mathbb{R}^{Nq \times Nm}$ such that
$$ \Big\| \big[ I + C_1 B_1 \Xi \big( (L_1 + H_1) \otimes I_m \big) \big]^{-1} \Big\| \le \rho < 1, \tag{21} $$
holds for all iteration steps $\sigma \in \mathbb{Z}_+$, then the initial tracking error converges asymptotically along the iteration axis, i.e., $\lim_{\sigma \to +\infty} \| e(0, \sigma) \| = 0$, under the initial state learning mechanism (20).
Proof of Theorem 1. 
At the initial instant, the first subsystem of the switched FONHMASs (5) is activated, i.e., $s^{(j)}(0) = 1$ for all $j$. To analyze the convergence properties of the initial tracking error $e(0,\sigma)$, premultiplying the proposed initial state learning law (20) by the output matrix $C_1^{(j)}$ yields the following learning law for the $j$-th agent:
$$ y^{(j)}(0, \sigma+1) = y^{(j)}(0,\sigma) + C_1^{(j)} B_1^{(j)} \varsigma^{(j)}\, \xi^{(j)}(0,\sigma). \tag{22} $$
Define the compact output vector $y(0,\sigma) = [y^{(1)T}, \ldots, y^{(N)T}]^T$, the input matrix $B_{s(t)} = \mathrm{diag}\big( B_{s}^{(1)}(t), \ldots, B_{s}^{(N)}(t) \big) \in \mathbb{R}^{Nn \times Nq}$, and the output matrix $C_{s(t)} = \mathrm{diag}\big( C_{s}^{(1)}(t), \ldots, C_{s}^{(N)}(t) \big) \in \mathbb{R}^{Nm \times Nn}$. Then Equation (22) can be rewritten in compact form as
$$ y(0,\sigma+1) = y(0,\sigma) + C_1 B_1 \Xi \big( (L_1 + H_1) \otimes I_m \big)\, e(0,\sigma+1), \tag{23} $$
where $\Xi = \mathrm{diag}(\varsigma^{(1)}, \varsigma^{(2)}, \ldots, \varsigma^{(N)}) \in \mathbb{R}^{Nq \times Nm}$. Subtracting both sides of Equation (23) from $\Delta\, y_d(0)$, we have
$$ e(0,\sigma+1) = e(0,\sigma) - C_1 B_1 \Xi \big( (L_1 + H_1) \otimes I_m \big)\, e(0,\sigma+1). \tag{24} $$
We rearrange the terms in (24) by moving the dependent term to the left-hand side to obtain
$$ e(0,\sigma+1) = \big[ I + C_1 B_1 \Xi \big( (L_1 + H_1) \otimes I_m \big) \big]^{-1} e(0,\sigma). \tag{25} $$
Taking the norm on both sides of (25) yields the following inequality:
$$ \| e(0,\sigma+1) \| \le \Big\| \big[ I + C_1 B_1 \Xi \big( (L_1 + H_1) \otimes I_m \big) \big]^{-1} \Big\|\, \| e(0,\sigma) \|. \tag{26} $$
By the contraction mapping principle, strictly decreasing convergence of the error norm is guaranteed whenever the coefficient matrix satisfies condition (21). Specifically, if (21) holds, it follows recursively that $\| e(0,\sigma) \| \le \rho^{\sigma} \| e(0,0) \|$. Since $0 \le \rho < 1$, we conclude that $\lim_{\sigma \to +\infty} \| e(0,\sigma) \| = 0$. The proof of Theorem 1 is completed. □

3.2. Dα-Type ILC Controller

In order to realize the bipartite consensus tracking objective, a distributed $D^{\alpha}$-type ILC protocol is proposed for the $j$-th agent as
$$ u^{(j)}(t, \sigma+1) = u^{(j)}(t,\sigma) + \gamma_{s^{(j)}(t)}\, D_{t}^{\alpha}\, \xi^{(j)}(t,\sigma), \tag{27} $$
where $\gamma_{s^{(j)}(t)} \in \mathbb{R}^{n \times m}$ is the learning gain dependent on the switching mode. Recalling the relationship between the consensus error and the tracking error, $\xi(t,\sigma) = \big[ (L_{s(t)} + H_{s(t)}) \otimes I_m \big] e(t,\sigma)$, the controller defined in (27) is cast into the following compact representation:
$$ u(t, \sigma+1) = u(t,\sigma) + \Gamma_{s(t)} \big( (L_{s(t)} + H_{s(t)}) \otimes I_m \big)\, D_{t}^{\alpha}\, e(t,\sigma), \tag{28} $$
where $\Gamma_{s(t)} = \mathrm{diag}\{ \gamma_{s^{(1)}(t)}, \ldots, \gamma_{s^{(N)}(t)} \} \in \mathbb{R}^{Nn \times Nm}$. To facilitate understanding, the block diagram of the proposed $D^{\alpha}$-type ILC protocol (27) with the initial state learning mechanism (20) is presented in Figure 2.
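The sketch below implements one pass of the update law (27) for a single agent with scalar output, approximating $D_t^{\alpha}\xi^{(j)}(\cdot,\sigma)$ by a short Grünwald-Letnikov sum; the gain, step size, and error trajectory are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def gl_frac_deriv(g, alpha, h):
    """Grunwald-Letnikov approximation of D^alpha g on a uniform grid (0 < alpha < 1)."""
    K = len(g)
    c = np.ones(K)
    for k in range(1, K):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    g0 = g - g[0]
    return np.array([np.dot(c[:n + 1], g0[n::-1]) for n in range(K)]) / h**alpha

alpha, h = 0.5, 0.01
t = np.arange(0.0, 2.0 + h, h)
xi_sigma = 0.4 * np.sin(2 * np.pi * t)   # consensus error trajectory at iteration sigma (assumed)
u_sigma = np.zeros_like(t)               # previous control input trajectory
gamma = 1.25                             # scalar learning gain (illustrative)

u_next = u_sigma + gamma * gl_frac_deriv(xi_sigma, alpha, h)   # update law (27)
print("||u_{sigma+1}||_inf =", np.max(np.abs(u_next)))
```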
Theorem 2. 
Under Assumptions 1 and 2, consider the switched FONHMASs (5). If the $D^{\alpha}$-type ILC controller (27) is used and the gain matrix $\Gamma_p$ satisfies the condition:
$$ \Big\| \big[ I + \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p B_p \big]^{-1} \Big\| \le \rho < 1, \tag{29} $$
for all $p \in \{1, 2, \ldots, M\}$, then the tracking error converges asymptotically along the iteration axis, i.e., $\lim_{\sigma \to +\infty} \| e(\cdot, \sigma) \| = 0$.
Proof of Theorem 2. 
The compact form of the FONHMASs (5) is
$$ \begin{cases} D_{t}^{\alpha} x(t,\sigma) = f_{s(t)}\big(x(t,\sigma), t\big) + B_{s(t)}\, u(t,\sigma), \\ y(t,\sigma) = C_{s(t)}\, x(t,\sigma). \end{cases} \tag{30} $$
According to Lemma 1 (i.e., Equation (4)), for any $t \in [t_{p-1}, t_p)$ within the $p$-th switching interval (i.e., $s(t) = p$), the state solution is
$$ x(t,\sigma) = x(t_{p-1},\sigma) + \frac{1}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \big[ f_p\big(x(\tau,\sigma), \tau\big) + B_p\, u(\tau,\sigma) \big]\, \mathrm{d}\tau. \tag{31} $$
Define the state tracking error as $\tilde{x}(t,\sigma) = \Delta\, x_d(t) - x(t,\sigma) \in \mathbb{R}^{Nn}$. According to the state solution (31), the state tracking error is derived as
$$ \tilde{x}(t,\sigma) = \tilde{x}(t_{p-1},\sigma) + \frac{1}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \big[ f_p\big(x_d(\tau), \tau\big) - f_p\big(x(\tau,\sigma), \tau\big) \big]\, \mathrm{d}\tau + \frac{1}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} B_p\, \tilde{u}(\tau,\sigma)\, \mathrm{d}\tau. \tag{32} $$
Taking norms on both sides of (32) and applying the Lipschitz condition (i.e., Assumption 2), with $L_f \triangleq \max_{p \in \mathcal{M}} \max_{j} l_p^{(j)}$, we have
$$ \begin{aligned} \| \tilde{x}(t,\sigma) \| &\le \| \tilde{x}(t_{p-1},\sigma) \| + \frac{1}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \big\| f_p\big(x_d(\tau), \tau\big) - f_p\big(x(\tau,\sigma), \tau\big) \big\|\, \mathrm{d}\tau + \frac{1}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| B_p \| \| \tilde{u}(\tau,\sigma) \|\, \mathrm{d}\tau \\ &\le \| \tilde{x}(t_{p-1},\sigma) \| + \frac{L_f}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| \tilde{x}(\tau,\sigma) \|\, \mathrm{d}\tau + \frac{\| B_p \|}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| \tilde{u}(\tau,\sigma) \|\, \mathrm{d}\tau. \end{aligned} \tag{33} $$
Next, according to Definition 3 (the $\lambda$-norm), we have
$$ \| \tilde{x}(t,\sigma) \| \le \| \tilde{x}(t_{p-1},\sigma) \| + \frac{L_f}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| \tilde{x}(\tau,\sigma) \|\, \mathrm{d}\tau + \frac{\| B_p \|}{\Gamma(\alpha)}\, \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} e^{\lambda \tau}\, \mathrm{d}\tau. \tag{34} $$
In (34), let $w = t - \tau$; then $\tau = t - w$ and $\mathrm{d}\tau = -\mathrm{d}w$. The integral is computed as
$$ \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} e^{\lambda \tau}\, \mathrm{d}\tau = e^{\lambda t} \int_{0}^{t-t_{p-1}} w^{\alpha-1} e^{-\lambda w}\, \mathrm{d}w = \frac{e^{\lambda t}}{\lambda^{\alpha}} \int_{0}^{\lambda (t-t_{p-1})} s^{\alpha-1} e^{-s}\, \mathrm{d}s < \frac{e^{\lambda t}}{\lambda^{\alpha}} \int_{0}^{\infty} s^{\alpha-1} e^{-s}\, \mathrm{d}s = \frac{e^{\lambda t}}{\lambda^{\alpha}}\, \Gamma(\alpha). \tag{35} $$
Substituting (35) into (34), we have
$$ \| \tilde{x}(t,\sigma) \| \le \| \tilde{x}(t_{p-1},\sigma) \| + \frac{L_f}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| \tilde{x}(\tau,\sigma) \|\, \mathrm{d}\tau + \frac{b_B\, e^{\lambda t}}{\lambda^{\alpha}}\, \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]}, \tag{36} $$
where $b_B = \max_{p \in \mathcal{M}} \| B_p \|$. In (36), we denote $a(\tau) = \frac{L_f}{\Gamma(\alpha)} (t-\tau)^{\alpha-1}$. By applying the Gronwall-Bellman lemma (Lemma 2) to (36), the explicit bound for the state error is derived as
$$ \| \tilde{x}(t,\sigma) \| \le \| \tilde{x}(t_{p-1},\sigma) \|\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{b_B}{\lambda^{\alpha}}\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau}\, e^{\lambda t}\, \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]}. \tag{37} $$
Next, we define the input error $\tilde{u}(t,\sigma) = \Delta\, u_d(t) - u(t,\sigma) \in \mathbb{R}^{Nq}$. Combining the compact form of the $D^{\alpha}$-type ILC protocol (28), the input error $\tilde{u}(t,\sigma+1)$ is derived as
$$ \begin{aligned} \tilde{u}(t,\sigma+1) &= \Delta\, u_d(t) - u(t,\sigma) + u(t,\sigma) - u(t,\sigma+1) \\ &= \tilde{u}(t,\sigma) - \big( u(t,\sigma+1) - u(t,\sigma) \big) \\ &= \tilde{u}(t,\sigma) - \Gamma_p \big( (L_p + H_p) \otimes I_m \big)\, D_{t}^{\alpha} e(t,\sigma+1) \\ &= \tilde{u}(t,\sigma) - \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p \big[ \tilde{f}_p\big(x(t,\sigma+1), t\big) + B_p\, \tilde{u}(t,\sigma+1) \big], \end{aligned} \tag{38} $$
where $\tilde{f}_p\big(x(t,\sigma+1), t\big) = f_p\big(x_d(t), t\big) - f_p\big(x(t,\sigma+1), t\big) \in \mathbb{R}^{Nn}$. Rearranging the terms in (38), we obtain
$$ \big[ I + \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p B_p \big]\, \tilde{u}(t,\sigma+1) = \tilde{u}(t,\sigma) - \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p\, \tilde{f}_p\big(x(t,\sigma+1), t\big). \tag{39} $$
Taking norms on both sides of (39), we have
$$ \| \tilde{u}(t,\sigma+1) \| \le \Big\| \big[ I + \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p B_p \big]^{-1} \Big\|\, \| \tilde{u}(t,\sigma) \| + h\, \| \tilde{x}(t,\sigma+1) \|, \tag{40} $$
where $h = \max_{p} L_f \big\| \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p \big\| \cdot \big\| \big[ I + \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p B_p \big]^{-1} \big\|$. Substituting (37) into the right-hand side of (40) yields
$$ \| \tilde{u}(t,\sigma+1) \| \le \Big\| \big[ I + \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p B_p \big]^{-1} \Big\|\, \| \tilde{u}(t,\sigma) \| + h\, \| \tilde{x}(t_{p-1},\sigma+1) \|\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{b_B h}{\lambda^{\alpha}}\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau}\, e^{\lambda t}\, \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]}. \tag{41} $$
Multiplying (41) by $e^{-\lambda t}$ and taking the $\lambda$-norm on both sides yields
$$ \| \tilde{u}(t,\sigma+1) \|_{\lambda}\big|_{[t_{p-1},t_p]} \le \Big( \Big\| \big[ I + \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p B_p \big]^{-1} \Big\| + \frac{b_B h}{\lambda^{\alpha}}\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau} \Big)\, \| \tilde{u}(t,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]} + h\, e^{-\lambda t}\, \| \tilde{x}(t_{p-1},\sigma+1) \|\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau}. \tag{42} $$
The rest of this proof is given as follows:
Step 1: On the first switching interval $[0, t_1)$, the first subsystem is activated, i.e., $s^{(j)}(t) = 1$. Thus, (42) is written as
$$ \| \tilde{u}(t,\sigma+1) \|_{\lambda}\big|_{[0,t_1]} \le \Big( \Big\| \big[ I + \Gamma_1 \big( (L_1 + H_1) \otimes I_m \big) C_1 B_1 \big]^{-1} \Big\| + O(\lambda^{-1}) \Big)\, \| \tilde{u}(t,\sigma) \|_{\lambda}\big|_{[0,t_1]} + h\, e^{-\lambda t}\, \| \tilde{x}(0,\sigma+1) \|\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}, \tag{43} $$
where $O(\lambda^{-1}) = \frac{b_B h}{\lambda^{\alpha}}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}$ collects the terms proportional to $1/\lambda^{\alpha}$. According to the results of Theorem 1, we know that $\lim_{\sigma \to +\infty} \| e(0,\sigma) \| = \lim_{\sigma \to +\infty} \| C_1 \tilde{x}(0,\sigma) \| = 0$; thus $\lim_{\sigma \to +\infty} \| \tilde{x}(0,\sigma+1) \| = 0$. On the other hand, if a sufficiently large $\lambda$ is chosen, the fact $\big\| [ I + \Gamma_1 ( (L_1 + H_1) \otimes I_m ) C_1 B_1 ]^{-1} \big\| + O(\lambda^{-1}) \le \tilde{\rho}_1 < 1$ can be validated according to the convergence condition (29). Hence, we can conclude that
$$ \| \tilde{u}(t,\sigma+1) \|_{\lambda}\big|_{[0,t_1]} \le \tilde{\rho}_1\, \| \tilde{u}(t,\sigma) \|_{\lambda}\big|_{[0,t_1]} + g_1\, \| \tilde{x}(0,\sigma+1) \|, \tag{44} $$
where $g_1 = h\, e^{-\lambda t}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}$. By applying Lemma 3 to (44), we have $\lim_{\sigma \to +\infty} \| \tilde{u}(\cdot,\sigma+1) \|_{\lambda}\big|_{[0,t_1]} = 0$. Furthermore, according to (37), we have
$$ \| \tilde{x}(t,\sigma) \| \le \| \tilde{x}(0,\sigma) \|\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{b_B}{\lambda^{\alpha}}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}\, e^{\lambda t}\, \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[0,t_1]}. \tag{45} $$
Thus, taking the limit on both sides of (45), we have
$$ \lim_{\sigma \to +\infty} \| \tilde{x}(t,\sigma) \| \le \lim_{\sigma \to +\infty} \| \tilde{x}(0,\sigma) \|\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{b_B}{\lambda^{\alpha}}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}\, e^{\lambda t} \lim_{\sigma \to +\infty} \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[0,t_1]} = 0, \quad t \in [0, t_1). \tag{46} $$
Based on the results in (46), it can be concluded that
$$ \lim_{\sigma \to +\infty} \| e(t,\sigma) \| \le \| C_1 \| \lim_{\sigma \to +\infty} \| \tilde{x}(t,\sigma) \| = 0, \quad t \in [0, t_1). \tag{47} $$
Step 2: On the second switching interval $[t_1, t_2)$, the second subsystem is activated, i.e., $s^{(j)}(t) = 2$. Thus, (42) is written as
$$ \| \tilde{u}(t,\sigma+1) \|_{\lambda}\big|_{[t_1,t_2]} \le \Big( \Big\| \big[ I + \Gamma_2 \big( (L_2 + H_2) \otimes I_m \big) C_2 B_2 \big]^{-1} \Big\| + O(\lambda^{-1}) \Big)\, \| \tilde{u}(t,\sigma) \|_{\lambda}\big|_{[t_1,t_2]} + h\, e^{-\lambda t}\, \| \tilde{x}(t_1,\sigma+1) \|\, e^{\int_{t_1}^{t} a(\tau)\, \mathrm{d}\tau}, \tag{48} $$
where $O(\lambda^{-1}) = \frac{b_B h}{\lambda^{\alpha}}\, e^{\int_{t_1}^{t} a(\tau)\, \mathrm{d}\tau}$. From the conclusion in (46), it follows that $\lim_{\sigma \to +\infty} \| \tilde{x}(t_1,\sigma+1) \| = \lim_{\sigma \to +\infty} \| \tilde{x}(t_1,\sigma) \| = 0$. Similarly, if a sufficiently large $\lambda$ is chosen, we have $\big\| [ I + \Gamma_2 ( (L_2 + H_2) \otimes I_m ) C_2 B_2 ]^{-1} \big\| + O(\lambda^{-1}) \le \tilde{\rho}_2 < 1$. Hence, by applying Lemma 3 to (48), we have $\lim_{\sigma \to +\infty} \| \tilde{u}(\cdot,\sigma+1) \|_{\lambda}\big|_{[t_1,t_2]} = 0$. Similar to (45)–(47), it can be concluded that $\lim_{\sigma \to +\infty} \| e(t,\sigma) \| = 0$ for all $t \in [t_1, t_2)$.
By repeating Step 2 on the successive switching intervals $[t_2, t_3), [t_3, t_4), \ldots, [t_{M-1}, t_M]$, we have $\lim_{\sigma \to +\infty} \| e(t,\sigma) \| = 0$ for all $t \in [0, T]$. □

3.3. Robustness Analysis

In practical applications, MASs are inevitably subject to external disturbances and non-repetitive initial uncertainties. In this part, the robustness of the proposed D α -type ILC scheme (27) against bounded external disturbances is investigated.
Consider the switched FONHMASs (5) with additive disturbances described by
$$ \begin{cases} D_{t}^{\alpha} x^{(j)}(t,\sigma) = f_{s^{(j)}(t)}\big(x^{(j)}(t,\sigma), t\big) + B_{s^{(j)}(t)}\, u^{(j)}(t,\sigma) + \omega^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = C_{s^{(j)}(t)}\, x^{(j)}(t,\sigma), \end{cases} \tag{49} $$
where ω ( j ) ( t , σ ) R n represents the external disturbance acting on the j-th agent. The compact form of (49) is:
$$ \begin{cases} D_{t}^{\alpha} x(t,\sigma) = f_{s(t)}\big(x(t,\sigma), t\big) + B_{s(t)}\, u(t,\sigma) + \omega(t,\sigma), \\ y(t,\sigma) = C_{s(t)}\, x(t,\sigma), \end{cases} \tag{50} $$
where ω ( t , σ ) = [ ω ( 1 ) ( t , σ ) T , , ω ( N ) ( t , σ ) T ] T R N n .
Assumption 3. 
It is assumed that both the external disturbance and the initial state error remain within finite bounds. Mathematically, there exist positive scalars $b_{\omega}$ and $b_{\tilde{x}}$ satisfying $\| \omega(t,\sigma) \| \le b_{\omega}$ for any $t \in [0, T]$, and $\| \tilde{x}(0,\sigma) \| \le b_{\tilde{x}}$.
Theorem 3. 
Under Assumptions 1–3, consider the switched FONHMASs (49) with bounded external disturbances. If the proposed $D^{\alpha}$-type ILC controller (28) is applied and the gain matrix $\Gamma_{s(t)}$ satisfies the convergence condition (29) given in Theorem 2, then the tracking error $e(t,\sigma)$ remains uniformly bounded as the iteration number $\sigma$ tends to infinity. Furthermore, if the disturbances vanish (i.e., $b_{\omega} \to 0$ and $b_{\tilde{x}} \to 0$), the tracking error converges to zero.
Proof of Theorem 3. 
Similar to the derivation in Theorem 2, let $\tilde{x}(t,\sigma) = \Delta\, x_d(t) - x(t,\sigma)$. Utilizing the equivalent Volterra integral equation for the disturbed system (50), the state tracking error dynamics for $t \in [t_{p-1}, t_p)$ are derived as:
$$ \tilde{x}(t,\sigma) = \tilde{x}(t_{p-1},\sigma) + \frac{1}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \big[ f_p\big(x_d(\tau), \tau\big) - f_p\big(x(\tau,\sigma), \tau\big) \big]\, \mathrm{d}\tau + \frac{1}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \big[ B_p\, \tilde{u}(\tau,\sigma) - \omega(\tau,\sigma) \big]\, \mathrm{d}\tau. \tag{51} $$
Taking the norm of (51) and applying Assumption 2 (the Lipschitz condition) and Assumption 3, we obtain
$$ \| \tilde{x}(t,\sigma) \| \le \| \tilde{x}(t_{p-1},\sigma) \| + \frac{L_f}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| \tilde{x}(\tau,\sigma) \|\, \mathrm{d}\tau + \frac{b_B}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| \tilde{u}(\tau,\sigma) \|\, \mathrm{d}\tau + \frac{b_{\omega}}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1}\, \mathrm{d}\tau. \tag{52} $$
Specifically, noting that $\int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1}\, \mathrm{d}\tau = \frac{(t-t_{p-1})^{\alpha}}{\alpha} \le \frac{T^{\alpha}}{\alpha}$, and using the bound $\int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} e^{\lambda \tau}\, \mathrm{d}\tau < \lambda^{-\alpha}\, \Gamma(\alpha)\, e^{\lambda t}$ from (35), the inequality (52) can be simplified to
$$ \| \tilde{x}(t,\sigma) \| \le \| \tilde{x}(t_{p-1},\sigma) \| + \frac{L_f}{\Gamma(\alpha)} \int_{t_{p-1}}^{t} (t-\tau)^{\alpha-1} \| \tilde{x}(\tau,\sigma) \|\, \mathrm{d}\tau + \frac{b_B\, e^{\lambda t}}{\lambda^{\alpha}}\, \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]} + \frac{b_{\omega}\, T^{\alpha}}{\Gamma(\alpha+1)}. \tag{53} $$
By applying the Gronwall-Bellman Lemma to (53), an explicit bound for the state error is obtained, i.e.,
$$ \| \tilde{x}(t,\sigma) \| \le \| \tilde{x}(t_{p-1},\sigma) \|\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{b_B}{\lambda^{\alpha}}\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau}\, e^{\lambda t}\, \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]} + \frac{b_{\omega}\, T^{\alpha}}{\Gamma(\alpha+1)}\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau}. \tag{54} $$
Next, consider the input error dynamics. Following the same derivation as in (38)–(41) but including the disturbance, the input error inequality (41) becomes
$$ \| \tilde{u}(t,\sigma+1) \|_{\lambda}\big|_{[t_{p-1},t_p]} \le \Big( \Big\| \big[ I + \Gamma_p \big( (L_p + H_p) \otimes I_m \big) C_p B_p \big]^{-1} \Big\| + \frac{b_B h}{\lambda^{\alpha}}\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau} \Big)\, \| \tilde{u}(t,\sigma) \|_{\lambda}\big|_{[t_{p-1},t_p]} + h\, e^{-\lambda t}\, \| \tilde{x}(t_{p-1},\sigma+1) \|\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{h\, b_{\omega}\, T^{\alpha}}{\Gamma(\alpha+1)}\, e^{\int_{t_{p-1}}^{t} a(\tau)\, \mathrm{d}\tau}. \tag{55} $$
Similarly, we proceed by the robustness analysis on the switching intervals.
Step 1: For the first interval $[0, t_1)$, i.e., $s(t) = 1$, (55) becomes
$$ \| \tilde{u}(t,\sigma+1) \|_{\lambda}\big|_{[0,t_1]} \le \Big( \Big\| \big[ I + \Gamma_1 \big( (L_1 + H_1) \otimes I_m \big) C_1 B_1 \big]^{-1} \Big\| + O(\lambda^{-1}) \Big)\, \| \tilde{u}(t,\sigma) \|_{\lambda}\big|_{[0,t_1]} + h\, e^{-\lambda t}\, \| \tilde{x}(0,\sigma+1) \|\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{h\, b_{\omega}\, T^{\alpha}}{\Gamma(\alpha+1)}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}. \tag{56} $$
It is noted that the initial state error is bounded by Assumption 3, i.e., $\| \tilde{x}(0,\sigma+1) \| \le b_{\tilde{x}}$. Substituting this into (56), we obtain
$$ \| \tilde{u}(t,\sigma+1) \|_{\lambda}\big|_{[0,t_1]} \le \Big( \Big\| \big[ I + \Gamma_1 \big( (L_1 + H_1) \otimes I_m \big) C_1 B_1 \big]^{-1} \Big\| + O(\lambda^{-1}) \Big)\, \| \tilde{u}(t,\sigma) \|_{\lambda}\big|_{[0,t_1]} + d_1, \tag{57} $$
where $d_1 = h\, e^{-\lambda t}\, b_{\tilde{x}}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{h\, b_{\omega}\, T^{\alpha}}{\Gamma(\alpha+1)}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau}$. According to the convergence condition (29), we can choose a sufficiently large $\lambda$ such that
$$ \Big\| \big[ I + \Gamma_1 \big( (L_1 + H_1) \otimes I_m \big) C_1 B_1 \big]^{-1} \Big\| + O(\lambda^{-1}) \le \hat{\rho}_1 < 1, $$
and, using Lemma 3, we have
$$ \limsup_{\sigma \to \infty} \| \tilde{u}(\cdot,\sigma) \|_{\lambda}\big|_{[0,t_1]} \le \frac{d_1}{1 - \hat{\rho}_1}. $$
Since $d_1$ is a linear combination of $b_{\tilde{x}}$ and $b_{\omega}$, the input error is bounded. Consequently, by substituting this back into the state error bound (54), the tracking error $e(t,\sigma)$ is also bounded for $t \in [0, t_1)$, i.e.,
$$ \limsup_{\sigma \to \infty} \| e(t,\sigma) \| \le b_C \limsup_{\sigma \to \infty} \| \tilde{x}(t,\sigma) \| \le b_C \Big[ b_{\tilde{x}}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{b_B\, d_1\, e^{\lambda t}}{\lambda^{\alpha} (1 - \hat{\rho}_1)}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau} + \frac{b_{\omega}\, T^{\alpha}}{\Gamma(\alpha+1)}\, e^{\int_{0}^{t} a(\tau)\, \mathrm{d}\tau} \Big], \tag{59} $$
where $b_C = \max_{p \in \mathcal{M}} \{ \| C_p \| \}$. From (59), if $b_{\omega} \to 0$ and $b_{\tilde{x}} \to 0$, we have $d_1 \to 0$, and furthermore $\limsup_{\sigma \to \infty} \| e(t,\sigma) \| = 0$ for all $t \in [0, t_1)$.
Step 2: For the subsequent intervals [ t p 1 , t p ) , the continuity of states implies that the initial error x ˜ ( t p 1 , σ ) is determined by the final error of the previous interval. Since the error in the previous interval is proven to be bounded, x ˜ ( t p 1 , σ ) is bounded. Recursively applying the logic from Step 1, the input and tracking errors remain bounded for all switching intervals.
Therefore, the bipartite consensus error $e(t,\sigma)$ is uniformly bounded over $[0, T]$. Finally, it is evident from the expression of $d_1$ that if the disturbance bound $b_{\omega} \to 0$ and the initial error bound $b_{\tilde{x}} \to 0$, then the asymptotic bound of $e(t,\sigma)$ approaches zero, recovering the result of Theorem 2. □
For clarity, we summarize the key functions of Theorems 1–3 in Table 2.
Remark 3. 
The proposed distributed $D^{\alpha}$-type ILC protocol (27) for the switched FONHMASs (5) can satisfy practical applications with low computational complexity. According to the definition of the bipartite consensus error (15) and the $D^{\alpha}$-type ILC updating law (27), the calculation involves communicating with neighbors and computing fractional-order derivatives. Specifically, obtaining the consensus error (15) for $N$ agents involves $O(N^2 m)$ addition and multiplication operations per time step. The controller update (27) involves $O(Nqm)$ operations for the gain matrix multiplication and $O(Nm \cdot k)$ operations for the numerical approximation of the fractional derivative $D_t^{\alpha}$ at the $k$-th time instant. Therefore, the time complexity of the distributed $D^{\alpha}$-type ILC protocol is $O(N T_{sam}(Nm + qm) + N T_{sam}^2 m)$ for an individual iteration, where $T_{sam}$ denotes the total number of sampling instants. On the other hand, the space complexity requires storing the trajectories of states, inputs, outputs, and historical error data over the interval $[0, T]$. The space complexities of the states $x^{(j)}(t,\sigma)$, inputs $u^{(j)}(t,\sigma)$, and outputs $y^{(j)}(t,\sigma)$ are $O(N T_{sam} n)$, $O(N T_{sam} q)$, and $O(N T_{sam} m)$, respectively. Additionally, the initial state learning mechanism (20) requires a space complexity of $O(Nn)$. Thus, the total space complexity of the proposed method is $O(N T_{sam}(n + m + q))$ for an individual iteration.

4. Simulation Examples

In this section, a FONHMAS consisting of ten follower agents and one leader is employed to illustrate the effectiveness of the proposed $D^{\alpha}$-type ILC controller. The communication topologies are shown in Figure 3.
The dynamics of the agents are defined as follows.
Agents 1, 4, 7, 10:
Subsystem $\Pi^{(j)}$:
$$ \begin{cases} D_{t}^{0.5} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.2 & 0 \\ 0.25 & 0.1 \end{bmatrix} \sin\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.1 \\ 0.3 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.2 & 1 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
Subsystem $\Psi^{(j)}$:
$$ \begin{cases} D_{t}^{0.5} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.4 & 0 \\ 0.15 & 0.1 \end{bmatrix} \cos\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.1 \\ 0.4 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.3 & 1 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
Agents 2, 5, 8:
Subsystem $\Pi^{(j)}$:
$$ \begin{cases} D_{t}^{0.5} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.3 & 0 \\ 0.45 & 0.1 \end{bmatrix} \cos\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.1 \\ 0.3 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.4 & 1 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
Subsystem $\Psi^{(j)}$:
$$ \begin{cases} D_{t}^{0.5} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.5 & 0.2 \\ 0.5 & 0.7 \end{bmatrix} \sin\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.2 \\ 0.4 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.8 & 0.2 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
Agents 3, 6, 9:
Subsystem $\Pi^{(j)}$:
$$ \begin{cases} D_{t}^{0.5} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.4 & 0 \\ 0.25 & 0.1 \end{bmatrix} \sin\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.1 \\ 1 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.4 & 0.7 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
Subsystem $\Psi^{(j)}$:
$$ \begin{cases} D_{t}^{0.5} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.2 & 0 \\ 0.45 & 0.1 \end{bmatrix} \cos\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.1 \\ 0.4 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.2 & 1 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
According to the communication network presented in Figure 3, agent 0 is the leader agent, and agents 1–10 are the follower agents, with each agent having two switching sub-modes. It is also noted from Figure 3 that $\Phi_1 = \{3, 4, 6, 7, 9, 10\}$ and $\Phi_2 = \{1, 2, 5, 8\}$.
The state-space model of the first subsystem for agents 1, 4, 7, and 10 is represented by Π ( 1 ) , and the second subsystem is represented by Ψ ( 1 ) from the previous subsection. The state-space model of the first subsystem for Agents 2, 5, and 8 is represented by Π ( 2 ) , and the second subsystem is represented by Ψ ( 2 ) from the previous subsection. The state-space model of the first subsystem for Agents 3, 6, and 9 is represented by Π ( 3 ) , and the second subsystem is represented by Ψ ( 3 ) from the previous subsection.
In the simulation process, the total number of iterations is set to σ m a x = 250 , and the task time period is T = 2 s. The gain matrices are chosen as H ¯ 1 = 0.85 I 10 and H ¯ 2 = 1.65 I 10 .
The initial states of all agents are set as
x ( 1 ) ( 0 , 0 ) = [ 0.2 , 0.1 ] T , x ( 2 ) ( 0 , 0 ) = [ 0.14 , 0.1 ] T , x ( 3 ) ( 0 , 0 ) = [ 0.2 , 0.14 ] T , x ( 4 ) ( 0 , 0 ) = [ 0.1 , 0.1 ] T , x ( 5 ) ( 0 , 0 ) = [ 0.2 , 0.05 ] T , x ( 6 ) ( 0 , 0 ) = [ 0.2 , 0.1 ] T , x ( 7 ) ( 0 , 0 ) = [ 0.13 , 0.1 ] T , x ( 8 ) ( 0 , 0 ) = [ 0.1 , 0.2 ] T , x ( 9 ) ( 0 , 0 ) = [ 0.1 , 0.13 ] T , x ( 10 ) ( 0 , 0 ) = [ 0.2 , 0.14 ] T .
The initial iterative control input is set to $u^{(j)}(t, 0) = 0$ for $j = 1, 2, \ldots, 10$. The simulation results are presented as follows.

4.1. Example I: Convergence Verification

This part validates the convergence of the developed fractional-order distributed Dα ILC protocols. The desired trajectory is generated by the leader agent 0, and defined as y d ( t ) = 1.5 sin ( 2 π t ) for t [ 0 , T ] . The switching signals in this example are illustrated in Figure 4.
From the switching signal in Figure 4, and the switching communication topology in Figure 3a,b, the switching scenarios for the dynamics and communication topology of the switched FONHMASs are illustrated as follows.
Case 1: When the switching signal $s^{(j)}(t) = 1$, the first subsystem of agents 1–10 is activated, and the communication network is $\bar{\mathcal{G}}_1$. The learning gain is chosen as $\Gamma_1 = 1.25 I_{20 \times 10}$.
Case 2: When the switching signal $s^{(j)}(t) = 2$, the second subsystem of agents 1–10 is activated, and the communication network is $\bar{\mathcal{G}}_2$. The learning gain is chosen as $\Gamma_2 = 0.85 I_{20 \times 10}$.
In addition, the gain matrix $\Xi$ in (23) is set as $0.5 I_{10 \times 10}$. Specifically, the learning gain matrices $\Xi$ and $\Gamma_p$ are selected such that the spectral radius requirements in (21) and (29) are strictly satisfied for all active subsystems. Figure 5 displays the output trajectories during the initial iteration ($\sigma = 1$), showing significant consensus errors due to the non-repetitive initial states and unlearned control inputs. Figure 6 displays the output trajectories at the 100-th iteration. As the iteration number increases, the bipartite tracking performance improves substantially. After 250 iterations, the bipartite consensus objective is realized, according to the output profiles in Figure 7. To be specific, agents in $\Phi_1 = \{3, 4, 6, 7, 9, 10\}$ successfully converge to $y_d(t)$, while agents in $\Phi_2 = \{1, 2, 5, 8\}$ converge to the opposite trajectory $-y_d(t)$. The supremum norm of the tracking errors is shown in Figure 8, which exhibits a strictly monotonically decreasing trend. These results validate Theorems 1 and 2, confirming that the proposed $D^{\alpha}$-type ILC law effectively handles switching dynamics and non-repetitive initial states to achieve bipartite consensus tracking of the switched FONHMASs.
Remark 4. 
It is worth noting that the selection of the learning gain matrices Ξ and Γ s ( t ) in this simulation is strictly constrained by the convergence conditions (21) and (29). To guarantee the convergence properties, these gains must be tuned such that the spectral conditions in (21) and (29) are satisfied. Specifically, based on the chosen parameters in Example I, the norms of the critical matrices for the initial state learning (20) and the Dα-type ILC protocol (27) are calculated as follows:
$$ \Big\| \big[ I + C_1 B_1 \Xi \big( (L_1 + H_1) \otimes I_m \big) \big]^{-1} \Big\| \approx 0.9973 < 1, $$
and
$$ \Big\| \big[ I + \Gamma_1 \big( (L_1 + H_1) \otimes I_m \big) C_1 B_1 \big]^{-1} \Big\| \approx 0.9195 < 1. $$
Both values are strictly less than 1, which theoretically validates that the selected gains satisfy the contraction mapping requirement and ensures the asymptotic convergence of the tracking errors.
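As a sanity check of the kind reported in this remark, the following sketch evaluates the two matrix norms in (21) and (29) numerically for a small hypothetical network; the topology, the $C_1^{(j)}B_1^{(j)}$ blocks, and the gains below are assumptions chosen only to demonstrate the computation, not the ten-agent setup of Example I.

```python
import numpy as np

N, m = 3, 1
A = np.array([[ 0, 1, -1],
              [ 1, 0,  0],
              [-1, 0,  0]], dtype=float)                 # signed adjacency (assumed)
L = np.diag(np.abs(A).sum(axis=1)) - A                   # signed Laplacian
H = np.diag([1.0, 0.0, 0.0])                             # leader pinning gains (assumed)

CB = np.kron(np.eye(N), np.array([[0.32]]))              # block-diagonal C_1^(j) B_1^(j) (assumed scalar)
Xi = 0.5 * np.eye(N * m)                                 # initial-state learning gains
Gamma = 1.25 * np.eye(N * m)                             # ILC learning gains
LH = np.kron(L + H, np.eye(m))

I = np.eye(N * m)
rho_init = np.linalg.norm(np.linalg.inv(I + CB @ Xi @ LH), 2)    # condition (21)
rho_ilc = np.linalg.norm(np.linalg.inv(I + Gamma @ LH @ CB), 2)  # condition (29)
print("condition (21) norm:", rho_init, "| condition (29) norm:", rho_ilc)
```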

4.2. Example II: Robustness Verification

The FONHMAS subjected to non-repetitive external disturbances is formulated as (49), and the robustness of the ILC protocol (27) is investigated in this part. The non-repetitive external disturbances are given by $\omega^{(j)}(t,\sigma) = 0.1\cos(t)$ for $j = 1, 3, 5, 7, 9$, and $0.1\sin(t)$ for $j = 2, 4, 6, 8, 10$. The leader agent's trajectory is set as $y_d(t) = 1.5\big(\sin((2\pi/3)t) + \cos(5\pi t) - 1\big)$ in this example.
The simulation results are demonstrated in Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. Figure 9 presents the switching signals for this example, and the other parameters are the same as in the previous example. The initial output performance at $\sigma = 1$ is shown in Figure 10. At the 100-th iteration (Figure 11), the ILC law is shown to actively suppress the time-varying disturbances. After 300 iterations, despite the persistent external disturbances, the agents maintain a stable bipartite consensus configuration with high precision, as shown in Figure 12. As illustrated in Figure 13, the supremum norm of the tracking errors remains sufficiently small, and the bipartite tracking errors of the agents stay bounded in a neighborhood of zero. This result confirms that the proposed $D^{\alpha}$-type ILC controller ensures that the bipartite consensus errors are uniformly bounded along the iteration direction under non-repetitive external disturbances.

4.3. Example III: Comparative Analysis

To verify the scalability of the proposed bipartite iterative consensus control scheme with respect to the number of agents, fractional orders, switching complexity, and robustness against different types of disturbances, we introduce two additional agents and increase the frequency of the switching signal. First, the dynamics of agents 11 and 12 are given as follows.
Agents 11, 12:
Subsystem $\Pi^{(j)}$:
$$ \begin{cases} D_{t}^{0.7} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.6 & 0.3 \\ 0.2 & 0.2 \end{bmatrix} \cos\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.8 \\ 0.4 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.2 & 0.8 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
Subsystem $\Psi^{(j)}$:
$$ \begin{cases} D_{t}^{0.7} x^{(j)}(t,\sigma) = \begin{bmatrix} 0.8 & 0.3 \\ 0.9 & 0.5 \end{bmatrix} \cos\big(x^{(j)}(t,\sigma)\big) + \begin{bmatrix} 0.1 \\ 0.3 \end{bmatrix} u^{(j)}(t,\sigma), \\ y^{(j)}(t,\sigma) = \begin{bmatrix} 0.2 & 0.9 \end{bmatrix} x^{(j)}(t,\sigma). \end{cases} $$
Unlike Examples I and II, the fractional order for all agents in this part is set as $\alpha = 0.7$. The communication topology for all agents switches between the two graphs depicted in Figure 14, while the system dynamics switch between subsystems $\Pi^{(j)}$ and $\Psi^{(j)}$. The corresponding switching signal is illustrated in Figure 15. As observed, compared with the switching signals in the previous examples, the switching signal in Figure 15 exhibits a higher switching frequency and shorter dwell times for the switched subsystems. The parameters for the $D^{\alpha}$-type ILC controller (27) with the initial state learning mechanism (20) remain consistent with those used in Example I. The initial states for agents 11 and 12 are initialized as $x^{(11)}(0,0) = [0.3, 0.25]^T$ and $x^{(12)}(0,0) = [0.15, 0.3]^T$, respectively. The leader's reference trajectory is given by $y_d = 1.7\sin(6\pi t)$, $t \in [0, 2]$.
The bipartite consensus tracking results for all agents are presented in Figure 16. As depicted, agents belonging to the subgroup $\Phi_1$ successfully track the desired trajectory $y_d$, whereas agents in $\Phi_2$ track the anti-phase trajectory $-y_d$ with high precision. Figure 17 illustrates the comparative experimental results between the proposed $D^{\alpha}$-type ILC and several other methods. It is worth noting that control strategies for such FONHMASs (5) are rarely reported in the existing literature; thus we compare our approach with a fractional-order proportional-integral-derivative (PID) method used for nonlinear FOMASs in [51]. Furthermore, comparisons are made against a $D^{\alpha}$-type ILC method without the initial state learning mechanism (20), as well as an integer-order ILC, i.e., P-type ILC. The comprehensive comparison results are displayed in Figure 17a,b. Figure 17a indicates that, compared to the integer-order ILC, the proposed $D^{\alpha}$-type ILC exhibits a faster convergence rate along the iteration axis and achieves a smaller convergence error, which is defined by $\mathrm{RMSE} = \sqrt{\sum_{j} \| e^{(j)}(t,\sigma) \|^{2}}$. Moreover, compared to the $D^{\alpha}$-type ILC without (20), the proposed scheme effectively mitigates the deviation between the actual and desired initial states, thereby achieving a lower convergence error. Figure 17b presents the time-domain RMSE comparison of the three ILC methods at the final iteration and the fractional-order PID method. It is evident that, since the fractional-order PID approach does not account for the switching nature of the system, it suffers from significant tracking errors at the switching instants. In contrast, the proposed ILC-based method, which explicitly incorporates the switching mechanism, yields a significantly smaller tracking error. This conclusion is further corroborated by the data in Table 3, where MRMSE denotes the maximum RMSE over the time domain and ARMSE represents the average RMSE. As indicated in Table 3, the proposed method equipped with the initial state learning mechanism achieves the lowest tracking error among the compared approaches. In addition, as shown by the comparison of convergence iterations in Table 4, the proposed $D^{\alpha}$-type ILC method exhibits a faster iterative convergence speed.
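For reference, the error metrics used in this comparison can be computed as in the short sketch below; the error array is synthetic, and the RMSE convention follows the definition given above (sum over agents at each time instant), with MRMSE and ARMSE taken as its maximum and average over the time horizon.

```python
import numpy as np

rng = np.random.default_rng(0)
e = 0.01 * rng.standard_normal((12, 201))        # |e^(j)(t)| for 12 agents over 201 samples (synthetic)

rmse = np.sqrt((e ** 2).sum(axis=0))             # RMSE(t) = sqrt(sum_j |e^(j)(t)|^2)
mrmse, armse = rmse.max(), rmse.mean()           # MRMSE and ARMSE over the time domain
print(f"MRMSE = {mrmse:.4f}, ARMSE = {armse:.4f}")
```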
Furthermore, the validation of the system's robustness is provided in Figure 18. Figure 18a investigates the impact of disturbance magnitude by setting ω^(j)(t, σ) = A cos(t) with amplitudes A of 0.1, 1.0, and 1.5. Remarkably, even with different amplitudes, the RMSE remains consistently low, highlighting the resilience of the Dα-type ILC method to varying disturbance strengths. Furthermore, Figure 18b extends this analysis to different disturbance types, comparing the RMSE under a trigonometric signal (i.e., ω^(j)(t, σ) = cos(t)), a stochastic signal (i.e., ω^(j)(t, σ) = δ(t), δ(t) ∼ N(0, 1)), and an impulsive signal (i.e., $\omega^{(j)}(t,\sigma)=\tfrac{1}{2}\big(1+\mathrm{sign}(\sin(2\pi t/T))\big)$). In all cases, the Dα-type ILC method maintains precise tracking performance with minimal errors, thereby confirming its robustness against diverse environmental disturbances. The MRMSE and ARMSE under the different disturbances are reported in Table 5, and it can be seen that all errors remain consistently low across the different disturbance types.
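The three disturbance profiles compared in Figure 18b can be generated as in the sketch below. The period T of the impulsive square wave is an assumed placeholder, and the impulsive expression follows the reconstruction given above.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)
T_imp = 0.5  # assumed period of the impulsive (square-wave) disturbance

omega_trig = np.cos(t)                                    # trigonometric signal
omega_stoch = np.random.normal(0.0, 1.0, size=t.shape)    # stochastic, N(0, 1)
omega_imp = 0.5 * (1.0 + np.sign(np.sin(2.0 * np.pi * t / T_imp)))  # impulsive
```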

4.4. Example IV: Practical Validation

To verify the effectiveness of the proposed ILC method in practical applications, we conduct a simulation experiment based on the fractional-order multi-motor system described in [52]. Following the FONHMAS model (5), the fractional-order model of each motor is formulated with the state $x^{(j)}(t,\sigma)=[\,\omega_p^{(j)}\;\; \omega_d^{(j)}\;\; 0.1\nu_p^{(j)}-0.1\nu_d^{(j)}\,]^{T}\in\mathbb{R}^{3}$, the drift term $f_{s^{(j)}(t)}(x^{(j)}(t,\sigma))=\tilde{A}_{s^{(j)}(t)}x^{(j)}(t,\sigma)+\phi_{s^{(j)}(t)}(x^{(j)}(t,\sigma))\in\mathbb{R}^{3}$, the output $y^{(j)}(t,\sigma)=\omega_p^{(j)}\in\mathbb{R}$, and the input $u^{(j)}(t,\sigma)\in\mathbb{R}$, where $\phi_{s^{(j)}(t)}(x^{(j)}(t,\sigma))\in\mathbb{R}^{3}$ is a nonlinear function and the system matrices are
$$\tilde{A}_{s^{(j)}(t)}=\begin{bmatrix}\dfrac{(K_p^{(j)})^2}{J_p^{(j)}R_p^{(j)}} & 0 & \dfrac{K_{pd}^{(j)}}{J_p^{(j)}}\\[4pt] 0 & \dfrac{(K_d^{(j)})^2}{J_d^{(j)}R_d^{(j)}} & \dfrac{K_{pd}^{(j)}}{J_d^{(j)}}\\[4pt] 0.1 & 0.1 & 0\end{bmatrix},\qquad B_{s^{(j)}(t)}=\begin{bmatrix}\dfrac{K_p^{(j)}}{J_p^{(j)}R_p^{(j)}}\\ 0\\ 0\end{bmatrix},\qquad C_{s^{(j)}(t)}=\begin{bmatrix}1 & 0 & 0\end{bmatrix},$$
where j = 1, 2, 3, 4, and the remaining symbols are defined in Table 1 of reference [30]. The fractional order is set to α = 0.1. Furthermore, the specific system parameters are given in Example II of reference [30]. The communication topology for the multi-motor system is illustrated in Figure 19. Each motor operates with two distinct subsystems. Both the subsystem dynamics and the communication topology undergo repetitive switching governed by the switching signal depicted in Figure 20. The initial states are initialized as x^(j)(0, 0) = 0_3. The control gain for the Dα-type ILC controller is set to H̄ = 0.35 I_3 for the first subsystem and H̄ = 0.41 I_3 for the second subsystem. The desired reference trajectory is defined as y_d(t) = 1.5(sin(2πt/3) + cos(6πt) − 1), t ∈ [0, 2].
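As a reference for the model structure, the per-motor matrices above can be assembled as in the following sketch. The layout mirrors the reconstructed matrices; the numerical parameter values are hypothetical placeholders, since the actual values are taken from Table 1 and Example II of reference [30].

```python
import numpy as np

def motor_matrices(Kp, Kd, Kpd, Jp, Jd, Rp, Rd):
    """Build the per-motor matrices of the fractional-order model.
    The layout follows the matrices shown above; the values passed in
    below are placeholders, not the parameters used in the paper."""
    A = np.array([[Kp**2 / (Jp * Rp), 0.0,               Kpd / Jp],
                  [0.0,               Kd**2 / (Jd * Rd), Kpd / Jd],
                  [0.1,               0.1,               0.0]])
    B = np.array([[Kp / (Jp * Rp)], [0.0], [0.0]])
    C = np.array([[1.0, 0.0, 0.0]])
    return A, B, C

# Hypothetical parameter set for one motor (for illustration only)
A1, B1, C1 = motor_matrices(Kp=0.5, Kd=0.45, Kpd=0.2,
                            Jp=0.02, Jd=0.025, Rp=1.2, Rd=1.1)
```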
Figure 21a,b demonstrate the output velocity trajectories of the four motors at the 1-st and 15-th iterations, respectively. It is evident that, as the number of iterations increases, Motors 1 and 4 gradually converge to the desired trajectory y_d, while Motors 2 and 3 approach the anti-phase trajectory −y_d. The final trajectories after 40 iterations are plotted in Figure 22a. Meanwhile, Figure 22b illustrates the evolution of the bipartite consensus tracking error along the iteration axis. These results confirm that the proposed ILC method successfully enables the four motors to achieve precise bipartite consensus tracking. Furthermore, similar to the analysis in Example III, a comparative study with other methods is conducted. The comparison results, presented in Figure 23a, demonstrate that the proposed Dα-type ILC method yields significantly lower tracking errors than both the integer-order ILC and the fractional-order PID controllers. The specific numerical values of MRMSE and ARMSE are detailed in Table 3. Regarding robustness against various disturbance signals, the corresponding RMSE results are depicted in Figure 23b. The comparison of the number of iterations required for convergence, as presented in Table 4, clearly demonstrates that the proposed method achieves significantly faster convergence. Both the RMSE curves in Figure 23b and the statistical data in Table 5 indicate that the Dα-type ILC method effectively suppresses the adverse effects of external disturbances, thereby exhibiting strong robustness. Consequently, this example validates the effectiveness of the proposed ILC approach in achieving bipartite consensus control for practical motor systems, while also confirming its superior performance in the presence of diverse disturbances.
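For completeness, the bipartite tracking errors plotted in Figure 22b can be evaluated per iteration as in the sketch below, where the signed reference follows the grouping of this example (Motors 1 and 4 cooperative, Motors 2 and 3 antagonistic). The motor outputs used here are synthetic placeholders for illustration only.

```python
import numpy as np

def bipartite_errors(Y, y_d, phi1):
    """Y: outputs of shape (num_agents, num_steps); y_d: reference (num_steps,).
    phi1: indices of agents in the cooperative subgroup; all others track -y_d."""
    signs = np.array([1.0 if j in phi1 else -1.0 for j in range(Y.shape[0])])
    return Y - signs[:, None] * y_d        # e^(j)(t) = y^(j)(t) - sgn_j * y_d(t)

t = np.linspace(0.0, 2.0, 2001)
y_d = 1.5 * (np.sin(2 * np.pi * t / 3) + np.cos(6 * np.pi * t) - 1.0)

# Synthetic outputs: Motors 1 and 4 near y_d, Motors 2 and 3 near -y_d
Y = np.vstack([y_d, -y_d, -y_d, y_d]) + 0.01 * np.random.randn(4, t.size)

e = bipartite_errors(Y, y_d, phi1={0, 3})  # Motors 1 and 4 -> indices 0 and 3
sup_err = np.abs(e).max(axis=1)            # sup_t |e^(j)(t)| for each motor
```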
In summary, the simulation results in the above four examples demonstrate that the proposed distributed fractional-order Dα-type ILC scheme is effective for the bipartite consensus tracking of switched FONHMASs and remains robust under different disturbances, provided that the convergence conditions derived in Theorems 1, 2, and 3 are satisfied. Therefore, the ILC method proposed in this paper effectively achieves bipartite consensus tracking for switched FONHMASs subject to switched dynamics, switching communication networks, non-repetitive initial states, and external disturbances, validating its reliability for coordinated control in complex, multi-task FONHMASs.

5. Conclusions

This paper has investigated the bipartite consensus tracking problem for switched FONHMASs subject to cooperative and antagonistic interactions as well as non-repetitive external disturbances. By developing a distributed Dα-type ILC protocol combined with an initial state learning mechanism, the complex coupling problems caused by switching topologies and fractional-order dynamics are effectively resolved. Theoretical analysis and simulation results on both numerical examples and a multi-motor system demonstrate that the proposed method possesses strong robustness while ensuring error convergence. Distinct from centralized control architectures, the proposed distributed Dα-type ILC framework relies solely on local neighbor information exchange, which significantly reduces the communication burden. Furthermore, the incorporation of fractional-order dynamics enables a more precise mathematical description of complex physical systems than integer-order models, thereby substantially improving the practical applicability of the proposed scheme.
Despite the theoretical and simulation verifications, this study has certain limitations. First, the current framework assumes a secure communication environment and does not account for the impact of cyber-attacks (e.g., denial-of-service attacks), which are increasingly critical in networked control systems. Second, the reliance on time-triggered communication implies that data are transmitted continuously regardless of the system state, which may lead to low communication efficiency and redundant data transmission. Motivated by these limitations, future research will focus on: (1) integrating event-triggered mechanisms into the ILC framework to screen out unnecessary data packets, thereby further reducing the communication burden and improving communication efficiency; (2) exploring novel Lyapunov functionals to relax the theoretical constraints imposed by the λ-norm and further generalize the control framework; (3) combining the initial state learning mechanism with state observers or input-compensation strategies to relax the requirement on the actual states used in this paper; and (4) validating the effectiveness and robustness of the proposed algorithm on actual physical systems to bridge the gap between theoretical analysis and real-world engineering applications.

Author Contributions

Conceptualization, S.Y. and S.C.; methodology, S.Y.; validation, S.Y. and S.C.; writing—original draft preparation, S.Y.; writing—review and editing, S.C.; project administration, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62403202; in part by the Postdoctoral Fellowship Program of CPSF under Grant GZC20240501.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, X.; Zhao, W.; Yuan, J.; Chen, T.; Zhang, C.; Wang, L. Distributed optimization for fractional-order multi-agent systems based on adaptive backstepping dynamic surface control technology. Fractal Fract. 2022, 6, 642. [Google Scholar] [CrossRef]
  2. Zhang, Y.; Mou, Z.; Gao, F.; Jiang, J.; Ding, R.; Han, Z. UAV-enabled secure communications by multi-agent deep reinforcement learning. IEEE Trans. Veh. Technol. 2020, 69, 11599–11611. [Google Scholar] [CrossRef]
  3. Xia, Z.; Du, J.; Wang, J.; Jiang, C.; Ren, Y.; Li, G.; Han, Z. Multi-agent reinforcement learning aided intelligent UAV swarm for target tracking. IEEE Trans. Veh. Technol. 2021, 71, 931–945. [Google Scholar] [CrossRef]
  4. Saraiva, F.d.O.; Bernardes, W.M.; Asada, E.N. A framework for classification of non-linear loads in smart grids using artificial neural networks and multi-agent systems. Neurocomputing 2015, 170, 328–338. [Google Scholar] [CrossRef]
  5. Gan, W.; Qiao, L. Many-Versus-Many UUV Attack-Defense Game in 3D Scenarios Using Hierarchical Multi-Agent Reinforcement Learning. IEEE Internet Things J. 2025, 12, 23479–23494. [Google Scholar] [CrossRef]
  6. Ouyang, W.; Mu, J.; Jing, X.; Wang, Y. Efficient vehicle recognition and tracking for UAV-enabled intelligent transport systems: A multi-agent reinforcement learning method. IEEE Trans. Intell. Transp. Syst. 2025, 26, 20930–20940. [Google Scholar] [CrossRef]
  7. Wang, G.; Wang, R.; Yi, D.; Zhou, X.; Zhang, S. Iterative learning formation control via input sharing for fractional-order singular multi-agent systems with local lipschitz nonlinearity. Fractal Fract. 2024, 8, 347. [Google Scholar] [CrossRef]
  8. Zhu, J.; Lu, C.; Li, J.; Wang, F.Y. Secure consensus control on multi-agent systems based on improved PBFT and raft blockchain consensus algorithms. IEEE/CAA J. Autom. Sin. 2025, 12, 1407–1417. [Google Scholar] [CrossRef]
  9. Hackos, J. Centralized versus distributed organizational structures. In Proceedings of the Annual Conference-Society for Technical Communication; Society for Technical Communication: Fairfax, VA, USA, 1999; Volume 46, p. 106. [Google Scholar]
  10. Hajar, K.; Hably, A.; Bacha, S.; Elrafhi, A.; Obeid, Z. Optimal centralized control application on microgrids. In 2016 3rd International Conference on Renewable Energies for Developing Countries (REDEC); IEEE: Piscataway, NJ, USA, 2016; pp. 1–6. [Google Scholar]
  11. Diaz, N.L.; Luna, A.C.; Vasquez, J.C.; Guerrero, J.M. Centralized control architecture for coordination of distributed renewable generation and energy storage in islanded AC microgrids. IEEE Trans. Power Electron. 2016, 32, 5202–5213. [Google Scholar] [CrossRef]
  12. Oumbé Tékam, G.; Kitio Kwuimy, C.; Woafo, P. Analysis of tristable energy harvesting system having fractional order viscoelastic material. Chaos Interdiscip. J. Nonlinear Sci. 2015, 25, 013112. [Google Scholar] [CrossRef]
  13. Shah, Z.M.; Khanday, F.A. Analysis of disordered dynamics in polymer nanocomposite dielectrics for the realization of fractional-order capacitor. IEEE Trans. Dielectr. Electr. Insul. 2021, 28, 266–273. [Google Scholar] [CrossRef]
  14. Machado, J.T.; Jesus, I.S.; Galhano, A.; Cunha, J.B. Fractional order electromagnetics. Signal Process. 2006, 86, 2637–2644. [Google Scholar] [CrossRef]
  15. Wen, X.J.; Wu, Z.M.; Lu, J.G. Stability analysis of a class of nonlinear fractional-order systems. IEEE Trans. Circuits Syst. II Express Briefs 2008, 55, 1178–1182. [Google Scholar] [CrossRef]
  16. Hu, F.; Zhu, W.Q. Stabilization of quasi integrable hamiltonian systems with fractional derivative damping by using fractional optimal control. IEEE Trans. Autom. Control 2013, 58, 2968–2973. [Google Scholar] [CrossRef]
  17. Huang, C.; Liu, H.; Huang, T.; Cao, J. Bifurcations due to different neutral delays in a fractional-order neutral-type neural network. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 563–575. [Google Scholar] [CrossRef]
  18. Yu, W.; Li, Y.; Wen, G.; Yu, X.; Cao, J. Observer design for tracking consensus in second-order multi-agent systems: Fractional order less than two. IEEE Trans. Autom. Control 2016, 62, 894–900. [Google Scholar] [CrossRef]
  19. Yang, H.Y.; Yang, Y.; Han, F.; Zhao, M.; Guo, L. Containment control of heterogeneous fractional-order multi-agent systems. J. Frankl. Inst. 2019, 356, 752–765. [Google Scholar] [CrossRef]
  20. Wang, L.; Zhang, G. Event-triggered iterative learning control for perfect consensus tracking of non-identical fractional order multi-agent systems. Int. J. Control Autom. Syst. 2021, 19, 1426–1442. [Google Scholar] [CrossRef]
  21. Yan, X.; Li, K.; Yang, C.; Zhuang, J.; Cao, J. Consensus of fractional-order multi-agent systems via observer-based boundary control. IEEE Trans. Netw. Sci. Eng. 2024, 11, 3370–3382. [Google Scholar] [CrossRef]
  22. Chandra, R. Competition and collaboration in cooperative coevolution of elman recurrent neural networks for time-series prediction. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 3123–3136. [Google Scholar] [CrossRef]
  23. Tao, W.; Liu, H.; Xu, J.; Dai, Q.; Zhou, J.; Wen, H.; Chen, Z. Collaboration or competition: An infomax-based period-aware transformer for ticket-grabbing prediction. IEEE Trans. Intell. Transp. Syst. 2024, 25, 19757–19769. [Google Scholar] [CrossRef]
  24. Meng, D. Dynamic distributed control for networks with cooperative–antagonistic interactions. IEEE Trans. Autom. Control 2017, 63, 2311–2326. [Google Scholar] [CrossRef]
  25. Gong, X.; Cui, Y.; Shen, J.; Shu, Z.; Huang, T. Distributed prescribed-time interval bipartite consensus of multi-agent systems on directed graphs: Theory and experiment. IEEE Trans. Netw. Sci. Eng. 2020, 8, 613–624. [Google Scholar] [CrossRef]
  26. Chen, X.; Yu, H.; Hao, F. Prescribed-time event-triggered bipartite consensus of multiagent systems. IEEE Trans. Cybern. 2020, 52, 2589–2598. [Google Scholar] [CrossRef]
  27. Xing, M.; Lu, J.; Liu, Y.; Chen, X. Event-based bipartite consensus of multi-agent systems subject to DoS attacks. IEEE Trans. Netw. Sci. Eng. 2022, 10, 68–80. [Google Scholar] [CrossRef]
  28. Li, Z.; Zhao, J. Adaptive consensus of non-strict feedback switched multi-agent systems with input saturations. IEEE/CAA J. Autom. Sin. 2021, 8, 1752–1761. [Google Scholar] [CrossRef]
  29. Zhang, S.; Liu, J.; Zhang, H.; Wang, W.; Zhang, Z. Tracking control for multi-agent systems with model switching and topological switching: A novel dual-switch-based dynamic event-triggered approach. IEEE Trans. Autom. Sci. Eng. 2025, 22, 9353–9362. [Google Scholar] [CrossRef]
  30. Yang, S.; Li, X.D. Quantized iterative learning control for consensus of switched nonlinear heterogeneous multi-agent systems. Nonlinear Dyn. 2025, 113, 6695–6716. [Google Scholar] [CrossRef]
  31. Zhang, L.; Liang, Y.; Li, Y.; Lu, S.; Yang, J. Switched control of fixed-wing aircraft for continuous airdrop of heavy payloads. J. Guid. Control Dyn. 2023, 46, 1826–1833. [Google Scholar] [CrossRef]
  32. Meng, Y.; Ye, H.; Yang, X.; Xiang, Z. Design and analysis of a multimodal hybrid amphibious vehicle. IEEE/ASME Trans. Mechatronics 2025, 30, 7161–7171. [Google Scholar] [CrossRef]
  33. Xue, M.; Tang, Y.; Ren, W.; Qian, F. Practical output synchronization for asynchronously switched multi-agent systems with adaption to fast-switching perturbations. Automatica 2020, 116, 108917. [Google Scholar] [CrossRef]
  34. Zou, W.; Shi, P.; Xiang, Z.; Shi, Y. Finite-time consensus of second-order switched nonlinear multi-agent systems. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1757–1762. [Google Scholar] [CrossRef] [PubMed]
  35. Saat, S.; Ahmad, M.A.; Ghazali, M.R. Data-driven brain emotional learning-based intelligent controller-PID control of MIMO systems based on a modified safe experimentation dynamics algorithm. Int. J. Cogn. Comput. Eng. 2025, 6, 74–99. [Google Scholar] [CrossRef]
  36. Lu, K.; Wang, H.; Zhang, H.; Wang, L. Convergence in high probability of distributed stochastic gradient descent algorithms. IEEE Trans. Autom. Control 2023, 69, 2189–2204. [Google Scholar] [CrossRef]
  37. Mohamad Nor, M.H.; Ismail, Z.H.; Ahmad, M.A. Broadcast control of multi-robot systems with norm-limited update vector. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420945958. [Google Scholar] [CrossRef]
  38. Bhatnagar, S.; Borkar, V.S. Multiscale chaotic SPSA and smoothed functional algorithms for simulation optimization. Simulation 2003, 79, 568–580. [Google Scholar] [CrossRef]
  39. Chen, B.; Chu, B. Distributed norm optimal iterative learning control for high performance consensus tracking. IEEE Trans. Autom. Control 2025. [Google Scholar] [CrossRef]
  40. Li, B.; Lan, T.; Zhao, Y.; Lyu, S. Open-loop and closed-loop Dα-type iterative learning control for fractional-order linear multi-agent systems with state-delays. J. Syst. Eng. Electron. 2021, 32, 197–208. [Google Scholar]
  41. Lan, T.; Li, B.; Dai, M. Consensus of fractional-order multi-agent systems using iterative learning control in the sense of Lebesgue-p norm. Asian J. Control 2022, 24, 1293–1303. [Google Scholar] [CrossRef]
  42. Luo, D.; Wang, J.; Shen, D. Consensus tracking problem for linear fractional multi-agent systems with initial state error. Nonlinear Anal. Model. Control 2020, 25, 766–785. [Google Scholar] [CrossRef]
  43. Lan, Y.H.; Bin, W.; Zhou, Y. Iterative learning consensus control with initial state learning for fractional order distributed parameter models multi-agent systems. Math. Methods Appl. Sci. 2022, 45, 5–20. [Google Scholar]
  44. Daftardar-Gejji, V. Fractional Calculus and Fractional Differential Equations; Springer: Singapore, 2019. [Google Scholar]
  45. Diethelm, K.; Ford, N. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2010; Volume 2004. [Google Scholar]
  46. Wang, L.; Yu, J.; Zhang, R.; Li, P.; Gao, F. Iterative learning control for multiphase batch processes with asynchronous switching. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 2536–2549. [Google Scholar] [CrossRef]
  47. Zhou, F.; Wang, Y. Iterative learning control for fractional order nonlinear system with initial shift. Nonlinear Dyn. 2021, 106, 3305–3314. [Google Scholar] [CrossRef]
  48. Feng, Q.; Zheng, B. Generalized Gronwall–Bellman-type delay dynamic inequalities on time scales and their applications. Appl. Math. Comput. 2012, 218, 7880–7892. [Google Scholar]
  49. Yang, S.; Yu, W.; Liu, Z.; Ma, F. A robust hybrid iterative learning formation strategy for multi-unmanned aerial vehicle systems with multi-operating modes. Drones 2024, 8, 406. [Google Scholar]
  50. Chen, Y.; Wen, C.; Gong, Z.; Sun, M. An iterative learning controller with initial state learning. IEEE Trans. Autom. Control 1999, 44, 371–376. [Google Scholar] [CrossRef]
  51. Batiha, I.M.; Ababneh, O.Y.; Al-Nana, A.A.; Alshanti, W.G.; Alshorm, S.; Momani, S. A numerical implementation of fractional-order PID controllers for autonomous vehicles. Axioms 2023, 12, 306. [Google Scholar] [CrossRef]
  52. Hu, G.; Xu, D.; Jiang, B.; Pan, T.; Hua, W. Fixed-time cooperative model-free sliding mode control for fractional-order multi-motor systems. IEEE Trans. Autom. Sci. Eng. 2025, 22, 22950–22961. [Google Scholar]
Figure 1. The schematic diagram of the switched FONHMASs (5).
Figure 2. The block diagram of the proposed Dα-type ILC protocol (27) with the initial state learning mechanism (20).
Figure 3. The switching communication topology of the switched FONHMASs. The black line represents cooperative interaction, while the red line represents antagonistic interaction. (a) G ¯ 1 . (b) G ¯ 2 .
Figure 4. Switching signal in Example I.
Figure 5. The bipartite consensus tracking profiles of ten follower agents at 1-st iteration in Example I. (a) y ( j ) ( t , 1 ) , j Φ 1 . (b) y ( j ) ( t , 1 ) , j Φ 2 .
Figure 6. The bipartite consensus tracking profiles of ten follower agents at 100-th iteration in Example I. (a) y ( j ) ( t , 100 ) , j Φ 1 . (b) y ( j ) ( t , 100 ) , j Φ 2 .
Figure 7. The bipartite consensus tracking profiles of ten follower agents at 250-th iteration in Example I, where Φ 1 = { 3 , 4 , 6 , 7 , 9 , 10 } , and Φ 2 = { 1 , 2 , 5 , 8 } .
Figure 8. The bipartite consensus tracking errors of ten follower agents along the iteration direction in Example I. (a) sup t [ 0 , 2 ] e ( j ) ( t , σ ) , j Φ 1 . (b) sup t [ 0 , 2 ] e ( j ) ( t , σ ) , j Φ 2 .
Figure 9. Switching signal in Example II.
Figure 10. The bipartite consensus tracking profiles of ten follower agents at 1-st iteration in Example II. (a) y ( j ) ( t , 1 ) , j Φ 1 . (b) y ( j ) ( t , 1 ) , j Φ 2 .
Figure 11. The bipartite consensus tracking profiles of ten follower agents at 100-th iteration in Example II. (a) y ( j ) ( t , 100 ) , j Φ 1 . (b) y ( j ) ( t , 100 ) , j Φ 2 .
Figure 12. The bipartite consensus tracking profiles of ten follower agents at 250-th iteration in Example II, where Φ 1 = { 3 , 4 , 6 , 7 , 9 , 10 } , and Φ 2 = { 1 , 2 , 5 , 8 } .
Figure 13. The bipartite consensus tracking errors of ten follower agents along the iteration direction in Example II. (a) sup t [ 0 , 2 ] e ( j ) ( t , σ ) , j Φ 1 . (b) sup t [ 0 , 2 ] e ( j ) ( t , σ ) , j Φ 2 .
Figure 14. The switching communication topology of the switched FONHMASs in Example III. The black line represents cooperative interaction, while the red line represents antagonistic interaction. (a) G ¯ 1 . (b) G ¯ 2 .
Figure 15. Switching signal in Example III.
Figure 16. The bipartite consensus tracking profiles of twelve follower agents at 250-th iteration in Example III, where Φ 1 = { 3 , 4 , 6 , 7 , 9 , 10 } , and Φ 2 = { 1 , 2 , 5 , 8 , 11 , 12 } .
Figure 17. The comparative experimental results with other methods. (a) RMSE in iteration domain. (b) RMSE in time domain.
Figure 18. The comparative experimental results with different disturbances. (a) Trigonometric functions of different amplitudes. (b) Different-type disturbances.
Figure 19. The switching communication topology of the fractional-order multi-motor systems in Example IV. The black line represents cooperative interaction, while the red line represents antagonistic interaction. (a) G ¯ 1 . (b) G ¯ 2 .
Figure 20. The switching signal of the switched fractional-order multi-motor systems in Example IV.
Figure 21. The bipartite consensus tracking results of the switched fractional-order multi-motor systems in Example IV. (a) 1-st iteration. (b) 15-th iteration.
Figure 22. The bipartite consensus tracking results of the switched fractional-order multi-motor systems in Example IV. (a) Output velocity trajectories at 40-th iteration. (b) Tracking errors along the iteration direction.
Figure 23. The comparative results of the switched fractional-order multi-motor systems in Example IV. (a) Comparative results with other methods. (b) Comparative results with different disturbances.
Table 1. Comparison of learning-based controllers for FOMAS.
Learning-Based Controller | Wang et al. [7] | Wang et al. [20] | Li et al. [40] | Lan et al. [41] | Luo et al. [42] | Lan et al. [43] | Our Work
Nonlinear××××
Heterogeneous×××
Arbitrary initial states××××××
Robustness analysis××××××
Switched dynamics××××××
Table 2. Summary of theoretical results of this paper.
| Theorem | Control Objective | Key Assumptions | Protocol | Outcome |
|---|---|---|---|---|
| Theorem 1 | Initial State Learning | Assumptions 1 and 2 | Initial state learning law (20) | Asymptotic convergence: the initial tracking error e(0, σ) converges to zero as σ → ∞. |
| Theorem 2 | Bipartite Consensus Tracking | Assumptions 1 and 2 | Dα-type ILC (27) | Asymptotic convergence: the tracking error e(t, σ) converges to zero for all t ∈ [0, T] as σ → ∞. |
| Theorem 3 | Robustness Analysis | Assumptions 1–3 | Dα-type ILC (27) | Uniform boundedness: the tracking error remains uniformly bounded; if the disturbances vanish, the error converges to zero. |
Table 3. The MRMSE and ARMSE of four methods in Examples III and IV.

| Method | Example III MRMSE | Example III ARMSE | Example IV MRMSE | Example IV ARMSE |
|---|---|---|---|---|
| Dα-type ILC with (20) | 0.0725 | 0.0108 | 0.0167 | 0.0145 |
| Dα-type ILC without (20) | 0.1044 | 0.0139 | - | - |
| Integer-order ILC | 0.3688 | 0.1194 | 0.3601 | 0.1769 |
| Fractional-order PID | 1.0470 | 0.3227 | 0.7550 | 0.3154 |
Table 4. The number of iterations required to reach different error thresholds in Examples III (σmax = 250) and IV (σmax = 40).

| Method | Example III: MRMSE < 0.5 | Example III: MRMSE < 0.1 | Example III: MRMSE < 0.05 | Example IV: MRMSE < 0.5 | Example IV: MRMSE < 0.1 | Example IV: MRMSE < 0.05 |
|---|---|---|---|---|---|---|
| Dα-type ILC with (20) | 41 | 105 | 141 | 14 | 22 | 28 |
| Dα-type ILC without (20) | 126 | - | - | - | - | - |
| Integer-order ILC | - | - | - | 34 | - | - |
Table 5. The MRMSE and ARMSE under different disturbances in Examples III and IV.

| Disturbance | Example III MRMSE | Example III ARMSE | Example IV MRMSE | Example IV ARMSE |
|---|---|---|---|---|
| Disturbance-free | 0.0725 | 0.0167 | 0.0145 | 0.0145 |
| Trigonometric function | 0.0833 | 0.0119 | 0.0168 | 0.0144 |
| Stochastic function | 0.0615 | 0.0114 | 0.0161 | 0.0140 |
| Impulsive function | 0.0816 | 0.0117 | 0.0175 | 0.0128 |