Article

A Fixed-Time Convergence Method for Solving Aggregative Games with Malicious Players

1 College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, China
2 Hunan Vanguard Group Corporation Limited, Changsha 410100, China
3 School of Robotics, Hunan University, Changsha 410082, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(15), 2998; https://doi.org/10.3390/electronics14152998
Submission received: 3 July 2025 / Revised: 21 July 2025 / Accepted: 24 July 2025 / Published: 28 July 2025
(This article belongs to the Special Issue Advanced Control Strategies and Applications of Multi-Agent Systems)

Abstract

This paper investigates a Nash equilibrium (NE)-seeking approach for the aggregative game problem of second-order multi-agent systems (MASs) with uncontrollable malicious players, whose presence may render the evolution of the global decision profile uncontrollable and thereby prevent normal players from reaching the NE. To mitigate the influence of malicious players on the system, a malicious player detection and disconnection (MPDD) algorithm is proposed based on the fixed-time convergence method. Subsequently, a predefined-time distributed NE-seeking algorithm is presented, utilizing a time-varying time-based generator (TBG) and a state-feedback scheme, ensuring that all normal players complete the game within the predefined time. The convergence properties of the algorithms are analyzed using Lyapunov stability theory. Theoretically, the aggregative game problem with malicious players can be solved by the proposed algorithms within any user-defined time. Finally, a numerical simulation of electricity market bidding verifies the effectiveness of the proposed algorithm.

1. Introduction

In the context of aggregative games, each player updates its decision to optimize its local cost function, which depends on both its own decision and the aggregate of all players’ decisions. Any unilateral change made by a player will directly affect the global cost function through this aggregate. The aggregative game problem is prevalent in various fields, such as electric vehicle charging scheduling, energy network management, and drone network configuration (see [1,2,3]).
There has been considerable attention on NE-seeking algorithms for aggregative games, with many meaningful results presented in decentralized or semi-decentralized settings [4,5]. As large-scale systems continue to evolve, distributed strategies have attracted significant interest from researchers, primarily due to their reliance on local information exchange and their scalability. This has led to the further development of network games [6]. In many scenarios, the feasible sets of players’ decisions are interdependent, prompting studies on generalized NE (GNE)-seeking problems for aggregative games, where coupling constraints exist among the decisions of players. For example, the authors of [7] proposed an algorithm based on projected dynamics and nonsmooth tracking dynamics for aggregative games with coupled constraints. A single-layer distributed algorithm was introduced in [8], in which communication between players was only required when their decisions were updated. The variational generalized Nash equilibrium of a class of fuzzy set games was discussed in [9], where each participant’s cost function is fuzzy and nonsmooth, and the strategy configuration is subject to coupling constraints. Recently, reinforcement learning-based multi-agent game algorithms have attracted widespread attention (see, e.g., [10,11]).
To enable players to solve game problems autonomously, several works have combined game theory with physical systems. For example, distributed aggregative games have been studied for second-order nonlinear systems [12], multiple heterogeneous Euler–Lagrange systems [13,14], and players with linear or nonlinear dynamics perturbed by external disturbances [15,16]. In addition, numerous NE- or GNE-seeking problems in noncooperative games have been explored for first-order nonlinear systems [17], second-order systems [18], multi-integrator systems [19], high-order nonlinear systems [20], and population dynamics [21]. Furthermore, many works have focused on distributed resource allocation and optimization problems for second-order systems [22] and high-order nonlinear MASs [23]. These works can be classified into two categories: nominal systems [13,14,18,19,20,21,23,24] and disturbed systems [12,15,16,17,22].
It is clear that the dynamics of players play a crucial role in algorithm design. Specifically, the proposed algorithm must simultaneously ensure system stability and convergence of players’ decisions to the NE. Compared to nominal systems, algorithm design for disturbed systems is more challenging because it must account for the influence of disturbances on the system. However, all the aforementioned works on disturbed systems share a common feature: the perturbations must satisfy certain assumptions, such as being bounded, differentiable, or time-dependent [12,22], which are difficult to meet in practical applications, especially in systems subject to stochastic noise [25]. Moreover, a sufficiently large perturbation can be interpreted as a malicious attack, which may cause the system to become uncontrollable. Examples include military equipment attacked by enemies, battlefield explosions, or servers in cyber-physical systems compromised by hackers. In aggregative games, since the cost function of players depends on global decisions, the existence of even a single maliciously attacked player prevents the decisions of all players from converging to the optimal solution. In such cases, conventional compensation-based methods are no longer applicable.
It is worth noting that, to date, few works have addressed NE seeking in aggregative games with malicious players. However, many approaches have been proposed in the study of resilient consensus in MASs under malicious attacks, such as the trusted-region-based sliding-window weighted approach for single-integrator MASs [26], the delayed impulsive control strategy for second-order integral MASs [27], the attack-isolation-based approach [28], and the appointed-time observer-based approach [29] for higher-order MASs, where the appointed-time observer-based approach significantly relaxes the graph requirement to only needing a directed spanning tree, handles general linear agent dynamics, and operates asynchronously; however, it relies on reliable communication channels for state exchange and is primarily designed for controller attacks rather than communication channel attacks or Byzantine faults. The isolation-based approach achieves resilient consensus in higher-order networks via distributed fixed-time observers and graph isolability. It requires undirected topologies and two-hop neighbor information, limiting scalability in sparse networks compared to neighbor-value exclusion methods like MSR. The impulsive control method eliminates the need to know the number of malicious agents by leveraging trusted nodes and delayed sampled data, but it requires pre-labeled trusted agents and is limited to double-integrator dynamics. All of these approaches require first detecting the attacked players. In contrast to resilient consensus problems, there is less information interaction between players in aggregative games. For example, the output and state of players cannot directly interact with each other, which increases the difficulty of detecting attacked players. Additionally, the objective of each player in an aggregative game is to optimize its local cost function, rather than to reach consensus. 
Recent surveys, such as [30,31], comprehensively categorize intermittent control methods for multi-agent systems (MASs) under limited communication, highlighting strategies to reduce resource consumption via periodic sampling and event-triggered mechanisms. However, they exhibit limitations in adversarial settings, particularly when malicious players disrupt the optimization process in non-cooperative games, due to their reliance on cooperative dynamics.
Motivated by the above discussion, this paper focuses on aggregative games for second-order MASs under malicious attacks. The primary objective is to develop an approach that detects attacked players within a fixed time and disconnects them from normal players. Ultimately, the decisions of normal players are designed to converge to the NE within the predefined time. The main contributions of this paper are outlined as follows:
  • This work considers malicious players that are uncontrollable and can influence the evolution of normal players’ decisions. In contrast to existing works that model perturbations and eliminate their effects using compensation methods [12,15,16,17,22], the malicious-attack model adopted here imposes no boundedness or smoothness assumptions, making it less conservative and more representative of real-world conditions, while rendering existing algorithms inapplicable.
  • Due to the limited information exchange between neighbors, a virtual system and a distributed observer are introduced to detect and disconnect malicious players. A novel MPDD algorithm, based on the fixed-time convergence method, is proposed to ensure that all malicious players are disconnected from normal players within a fixed time.
  • A predefined-time distributed NE-seeking algorithm is proposed, based on the time-varying TBG scheme, to ensure that the decisions of all normal players converge to an arbitrarily small neighborhood of the NE within the predefined time and exponentially converge to the NE after the predefined time. Convergence analysis is performed using Lyapunov stability theory.
The remainder of this paper is organized as follows. In Section 2, the necessary preliminaries are introduced and our problem is formulated. In Section 3, the main results of this paper are presented, including the MPDD algorithm and the NE-seeking algorithm, along with an analysis of their convergence. In Section 4, a numerical example is presented to verify the effectiveness of the proposed algorithms. In Section 5, we present the conclusions.
Notations: In this paper, $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{R}^n$, and $\mathbb{R}^{n\times m}$ denote the set of real numbers, the set of nonnegative real numbers, the $n$-dimensional real vector space, and the space of $n\times m$ real matrices, respectively. $A^T$ denotes the transpose of matrix $A$. For $x \in \mathbb{R}^n$, $x_i$ is the $i$th element of $x$, and $\mathrm{col}(x_1, \dots, x_n) = [x_1^T, \dots, x_n^T]^T$. $\|\cdot\|$ denotes the standard Euclidean norm, and $\otimes$ denotes the Kronecker product. $\mathbf{1}_n$ and $\mathbf{0}_n$ are the column vectors of $n$ ones and $n$ zeros, respectively. $I_n$ denotes the $n\times n$ identity matrix. $\mathrm{sig}(x)^a = \mathrm{sign}(x)|x|^a$, where $\mathrm{sign}(x)$ is the standard sign function. A function $F: \mathbb{R}^n \to \mathbb{R}$ is convex if, for all $x, y \in \mathbb{R}^n$ and $\alpha \in [0, 1]$, $F(\alpha x + (1-\alpha)y) \le \alpha F(x) + (1-\alpha)F(y)$. A mapping $F: \mathbb{R}^n \to \mathbb{R}^n$ is $\omega$-strongly monotone ($\omega > 0$) if, for all $x, y \in \mathbb{R}^n$, $(x-y)^T(F(x)-F(y)) \ge \omega\|x-y\|^2$, and is $\mu$-Lipschitz ($\mu > 0$) if $\|F(x)-F(y)\| \le \mu\|x-y\|$.

2. Preliminaries

2.1. Graph Theory

Consider a graph $G(V, E, A)$, where $V := \{1, \dots, N\}$ is the set of vertices, $E \subseteq V \times V$ is the set of edges, and $A := [a_{ij}] \in \mathbb{R}^{N\times N}$ is the adjacency matrix. Each entry of the matrix satisfies $a_{ij} = a_{ji} > 0$ if $(i, j) \in E$, and $a_{ij} = 0$ otherwise; additionally, $a_{ii} = 0$. Denote $D_i = \sum_{j=1}^N a_{ij}$, $i \in V$, and $D = \mathrm{diag}(D_1, D_2, \dots, D_N)$. The Laplacian matrix of the graph $G$ is $L = D - A$. $\lambda_i$ denotes the $i$th eigenvalue of $L$, ordered such that $\lambda_i \le \lambda_j$ for $i \le j$. An undirected graph is connected if there is a path between each pair of nodes. Then, we have $\mathbf{1}_N^T L = \mathbf{0}_N^T$ and $L\mathbf{1}_N = \mathbf{0}_N$. To facilitate description, we do not discriminate among agents, players, and nodes.
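As a quick sanity check, the Laplacian of a small undirected graph can be built directly from these definitions. The following minimal NumPy sketch (the 4-node ring with unit weights is our own example) verifies that the row sums vanish, i.e., $L\mathbf{1}_N = \mathbf{0}_N$:

```python
import numpy as np

# Small undirected graph on N = 4 nodes (a ring), with unit weights a_ij = a_ji.
N = 4
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = np.zeros((N, N))
for i, j in E:
    A[i, j] = A[j, i] = 1.0        # a_ij = a_ji > 0 for (i, j) in E, a_ii = 0

D = np.diag(A.sum(axis=1))          # D_i = sum_j a_ij
L = D - A                           # graph Laplacian L = D - A

print(L @ np.ones(N))               # -> [0. 0. 0. 0.], i.e., L 1_N = 0_N
```

Since the ring is connected, the second smallest eigenvalue of this $L$ is strictly positive.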

2.2. Problem Formulation

Consider an aggregative game problem with N players, where their communication topology is described by a graph G with the node set V = { 1 , 2 , , N } . There are two types of players in the system: normal players and malicious players. The dynamics of player i are described as
$$\dot x_i = v_i, \quad \dot v_i = u_i, \quad i \in V_N; \qquad x_i = g_i(x_i, t), \quad i \in V_M$$
where $x_i$ and $v_i$ represent the position and velocity of player $i$, respectively; $u_i \in \mathbb{R}$ represents the control input; $g_i$ is an unknown time-varying function denoting the influence of malicious attacks; $V_N$ and $V_M$ denote the sets of normal players and malicious players, respectively; and $V = V_N \cup V_M$.
Remark 1.
To simplify the description of complicated situations, the dynamics of malicious players are expressed as $x_i = g_i(x_i, t)$, $i \in V_M$. This means that malicious players update their position state without following the preset control algorithm due to malicious attacks, which may occur in the control input, actuator, or sensor. In this paper, the position state is regarded as the output and decision of players. Therefore, the dynamics of malicious player $i \in V_M$ can also be expressed as
$$\dot x_i = v_i, \quad \dot v_i = u_i + g_i(x_i, t), \quad y_i = x_i \qquad \text{or} \qquad \dot x_i = v_i, \quad \dot v_i = u_i, \quad y_i = x_i + g_i(x_i, t).$$
The two types of players are defined below.
Definition 1.
Normal players are those who update their decisions according to the preset control algorithm and send the actual information to all their neighbors. Malicious players, on the other hand, update their decisions but do not follow the preset control algorithm, and the information transmitted to their neighbors is distorted.
Each player has a local cost function $J_i(x_i, x_{-i}): \mathbb{R}^N \to \mathbb{R}$ that is only available to itself, where $x_{-i}$ represents the decisions of all players other than $i$. Let $x = \mathrm{col}(x_1, x_2, \dots, x_N) \in \mathbb{R}^N$, and let the aggregate function $\sigma(x): \mathbb{R}^N \to \mathbb{R}$ denote the aggregate of all players’ decisions, indicating that each player’s cost function is affected by the decisions of others. It is defined as
$$\sigma(x) = \frac{1}{N}\sum_{i=1}^N \varphi_i(x_i)$$
where $\varphi_i(x_i)$ is a (possibly nonlinear) globally Lipschitz continuous function representing player $i$’s local contribution to the aggregate. The aggregate function $\sigma$ specifies the gradients of the cost functions as $\nabla_{x_i} J_i(x_i, x_{-i}) = \vartheta_i(x_i, \sigma(x))$ with a function $\vartheta_i: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, and $G_i(x_i, s_i) = \vartheta_i(x_i, \sigma(x))\big|_{\sigma(x) = s_i}$, where $s_i$ is player $i$’s estimate of the aggregate function (because $\sigma(x)$ contains global decision information that is not available to individual players). Moreover, $G(\cdot)$ is defined by stacking together the gradients of the cost functions of all players as $G(x, s) = \mathrm{col}(G_1(x_1, s_1), G_2(x_2, s_2), \dots, G_N(x_N, s_N))$.
The NE-seeking strategy for MASs considers the influence of the decisions of other players and adjusts the control input to minimize their cost function J i ( x i , x i ) . Due to the existence of uncontrollable malicious players, the evolution of their decisions does not depend on the designed algorithm and may mislead the game task. Therefore, the objective of this paper is to detect malicious players and filter them out so that all normal players’ decisions reach the NE. That is, each normal player i faces the following optimization problem:
$$\min_{x_i \in \mathbb{R}} J_i(x_i, x_{-i}), \quad i \in V_N.$$
For the game (3), the decision profile $x^*$ is the NE if $J_i(x_i^*, x_{-i}^*) \le J_i(x_i, x_{-i}^*)$ for every $i \in V_N$ and every feasible $x_i$, which means that no player can decrease its cost function by unilaterally changing $x_i^*$ to any other feasible point.
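To make the definition concrete, consider a hypothetical quadratic aggregative game (our own illustration, not from the paper) with $J_i(x_i, x_{-i}) = (x_i - c_i)^2 + x_i\,\sigma(x)$ and $\sigma(x) = \frac{1}{N}\sum_j x_j$. The NE solves the linear first-order conditions, and no unilateral deviation lowers a player's cost:

```python
import numpy as np

# Hypothetical quadratic game: J_i = (x_i - c_i)^2 + x_i * mean(x).
N = 3
c = np.array([1.0, 2.0, 3.0])

def J(i, x):
    return (x[i] - c[i])**2 + x[i] * x.mean()

# First-order conditions: 2 (x_i - c_i) + mean(x) + x_i / N = 0 for all i,
# i.e. (2 I + (I + 1 1^T) / N) x* = 2 c  -- a linear system.
M = 2*np.eye(N) + (np.eye(N) + np.ones((N, N))) / N
x_star = np.linalg.solve(M, 2*c)

# No player can reduce its cost by unilaterally deviating from x*.
for i in range(N):
    for d in (-0.5, -0.1, 0.1, 0.5):
        x_dev = x_star.copy()
        x_dev[i] += d
        assert J(i, x_dev) >= J(i, x_star)
```

Because each $J_i$ is strictly convex in $x_i$ here, the NE is the unique solution of the stacked gradient equation, which is the structure Lemma 1 below formalizes.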
Some standard assumptions are given below, which are widely used in NE-seeking problems (see, e.g., [7,12,32]).
Assumption 1.
The undirected graph G considered in this paper is connected.
Assumption 2.
The cost function J i ( x i , x i ) is convex with respect to x i for every fixed x i and is continuously differentiable in x i , i V N .
Assumption 3.
G ( x , s ) is ω-strongly monotone and μ-Lipschitz continuous in R 2 N .
Assumption 4.
There exists at least one normal player among the neighbors of each normal player, and malicious players are not the root node of the communication graph.
Remark 2.
Assumption 1 is the premise that players communicate with neighbors. Assumptions 2 and 3 ensure the existence and uniqueness of the NE solution. By Assumption 4, there is at least one connected edge between all normal players.
Lemma 1
([12]). By Assumptions 2 and 3, x * is the NE of the game (3) if and only if
$$\nabla_{x_i}J_i(x_i^*, x_{-i}^*) = G_i(x_i^*, s_i^*) = 0, \qquad s_i^* = s_j^* = \sigma(x^*) = \frac{1}{N_N}\sum_{i=1}^{N_N}\varphi_i(x_i^*), \quad \forall i, j \in V_N$$
where N N denotes the number of normal players.
Lemma 2
([33]). Suppose that a graph $G$ is undirected and connected. For its Laplacian matrix $L$ and any vector $x$ orthogonal to $\mathbf{1}_N$, we have the following conclusion:
$$x^T L x \ge \lambda_2(L)\|x\|^2, \qquad x^T L^2 x \le \lambda_{max}(L^2)\|x\|^2,$$
where $\lambda_2(L)$ and $\lambda_{max}(L^2)$ denote the second smallest eigenvalue of $L$ and the largest eigenvalue of $L^2$, respectively.
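Lemma 2 is easy to verify numerically for a small connected graph (the first inequality is applied to vectors orthogonal to $\mathbf{1}_N$); the 4-node ring below is our own example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Laplacian of a connected undirected graph (4-node ring, our example).
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
lam = np.sort(np.linalg.eigvalsh(L))
lam2 = lam[1]                              # second smallest eigenvalue of L
lmax2 = np.max(np.linalg.eigvalsh(L @ L))  # largest eigenvalue of L^2

for _ in range(100):
    x = rng.standard_normal(4)
    x -= x.mean()                          # project onto subspace orthogonal to 1_N
    assert x @ L @ x >= lam2 * (x @ x) - 1e-9
    assert x @ (L @ L) @ x <= lmax2 * (x @ x) + 1e-9
```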
Lemma 3
([34]). Consider the following system:
$$\dot x = f(x, t), \qquad x(0) = x_0$$
where $x \in \mathbb{R}^n$ and $f(\cdot): \mathbb{R}^n \times \mathbb{R}_+ \to \mathbb{R}^n$ is a nonlinear function. Suppose there exists a positive definite, continuously differentiable, and radially unbounded function $V(x): \mathbb{R}^n \to \mathbb{R}_+$ such that, for any solution $x(t) \in \mathbb{R}^n \setminus \{0\}$, $\dot V(x(t)) \le -\big[\alpha(V(x(t)))^p + \beta(V(x(t)))^q\big]^k$, where the parameters satisfy $\alpha, \beta, p, q, k > 0$, $pk < 1$, and $qk > 1$. Then the origin of system (6) is globally fixed-time stable, and the convergence time satisfies $T \le T_{max} = \frac{1}{\beta^k(qk-1)} + \frac{1}{\alpha^k(1-pk)}$ for all $x_0 \in \mathbb{R}^n$.

3. Algorithm Design for Aggregative Games with Malicious Players

Based on the above analysis, in this section, algorithms are designed to detect and disconnect malicious players, as well as for NE seeking by normal players.
Unlike existing consensus and non-cooperative game problems, in which agents (or neighbors in distributed interactions) can exchange decision-making information, the aggregative game problem presents a different challenge. In this case, players do not directly exchange decision information, and even the function φ i ( x i ) that governs the decision variables cannot be shared. This lack of information exchange makes it impossible to identify malicious individuals by simply comparing interaction data between players. Inspired by the resilient consensus problem in [28], we address the aggregative game problem with malicious players using the following three steps:
(1)
Fixed-time stability: A fixed-time stabilization algorithm is developed to ensure that the decisions of all normal players stabilize within a fixed time.
(2)
Malicious player detection and disconnection: A malicious player detection and isolation algorithm is designed to detect and disconnect each malicious player from the normal players.
(3)
Predefined-time convergence: A predefined-time distributed NE-seeking algorithm is proposed to ensure that all normal players’ decisions converge to the NE at the predefined time.
Remark 3.
In the flowchart depicted in Figure 1, Steps 1 and 2 serve as additional processes aimed at eliminating the influence of malicious players on normal players. Specifically, the virtual systems and the observer function as hidden layers that do not interfere with the state updates of the players. Unlike noncooperative games or consensus problems, in the aggregative game considered in this paper, state information cannot be exchanged between neighbors. Given this premise, detecting malicious players through state-error calculations is not feasible. Therefore, a novel detection algorithm is proposed.

3.1. Fixed-Time Stabilization Algorithm

To account for the influence of malicious attacks, a fixed-time convergence algorithm is designed to ensure that the system states of normal players stabilize within a finite time. In contrast, the states of malicious players do not stabilize.
The dynamics of the players (1) imply that convergence of the velocity state to the origin is both a necessary and sufficient condition for the system’s stability. Based on this, a fixed-time stabilization algorithm for player i , i V , is designed as follows:
$$u_i = -\alpha\,\mathrm{sig}(v_i)^p - \beta\,\mathrm{sig}(v_i)^q$$
where α , β > 0 and 0 < p < 1 , q > 1 are constants. Combining Algorithm (7) and system (1), we get the closed-loop system as
$$\dot x_i = v_i, \qquad \dot v_i = -\alpha\,\mathrm{sig}(v_i)^p - \beta\,\mathrm{sig}(v_i)^q, \quad i \in V_N.$$
Then, we are in a position to give the following lemma.
Lemma 4.
With the appropriate parameters defined in (7), the closed-loop system (8) is globally stable within a fixed time $T_1 \le \frac{1}{\alpha(1-p)} + \frac{1}{N^{\frac{1-q}{2}}\beta(q-1)}$.
Proof. 
As discussed in Section 3.1, we only need to analyze the stability of the velocity state at the origin. Choose a candidate Lyapunov function as $V_1(t) = \sum_{i=1}^N \frac{1}{2}v_i^2$.
Taking the derivative of V 1 ( t ) along (8), we have
$$\dot V_1(t) = \sum_{i=1}^N v_i\dot v_i = -\alpha\sum_{i=1}^N|v_i|^{p+1} - \beta\sum_{i=1}^N|v_i|^{q+1}.$$
With the inequalities that, for $x_i \in \mathbb{R}_+$, $i = 1, 2, \dots, n$: $(x_1 + \dots + x_n)^\rho \le x_1^\rho + \dots + x_n^\rho$ for $\rho \in (0, 1)$, and $n^{1-\rho}(x_1 + \dots + x_n)^\rho \le x_1^\rho + \dots + x_n^\rho$ for $\rho \in [1, \infty)$, we can get
$$\sum_{i=1}^N |v_i|^{p+1} \ge \Big(\sum_{i=1}^N |v_i|^2\Big)^{\frac{p+1}{2}}, \qquad \sum_{i=1}^N |v_i|^{q+1} \ge N^{\frac{1-q}{2}}\Big(\sum_{i=1}^N |v_i|^2\Big)^{\frac{q+1}{2}}.$$
Then, (9) can be rewritten as
$$\dot V_1(t) \le -2^{\frac{p+1}{2}}\alpha V_1^{\frac{p+1}{2}} - 2^{\frac{q+1}{2}} N^{\frac{1-q}{2}}\beta V_1^{\frac{q+1}{2}}.$$
From Lemma 3, we can deduce that the convergence time satisfies $T_1 \le T_{max} = \frac{1}{\alpha(1-p)} + \frac{1}{N^{\frac{1-q}{2}}\beta(q-1)}$.    □
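The bound of Lemma 4 can be checked numerically. The sketch below integrates (8) with example gains (our own values, not from the paper) and verifies that all velocities have settled well within the stated fixed time $T_1$, regardless of the initial velocities:

```python
import numpy as np

def sig(v, a):
    # sig(v)^a = sign(v) * |v|^a, elementwise
    return np.sign(v) * np.abs(v) ** a

alpha, beta, p, q = 2.0, 2.0, 0.5, 1.5      # example gains: 0 < p < 1 < q
N = 4
T1 = 1/(alpha*(1 - p)) + 1/(N**((1 - q)/2) * beta * (q - 1))  # Lemma 4 bound

dt = 1e-4
v = np.array([5.0, -3.0, 1.5, -0.75])        # arbitrary initial velocities
x = np.zeros(N)
for _ in range(int(T1/dt)):
    u = -alpha*sig(v, p) - beta*sig(v, q)    # algorithm (7)
    x += dt * v
    v += dt * u

print(T1, np.max(np.abs(v)))                 # velocities are ~0 by t = T1
```

The forward-Euler step is crude near $v = 0$ (the exponent $p < 1$ makes the vector field non-Lipschitz there), but any residual chattering is far below the tolerance used here.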
Remark 4.
Lemma 4 illustrates that the states of all normal players are stable for t T 1 . Meanwhile, the function φ i ( x i ( t ) ) is stable, which implies that φ ˙ i ( x i ( t ) ) = 0 , i V N . For malicious player i V M , however, φ ˙ i ( x i ( t ) ) 0 . Thereafter, φ ˙ i ( x i ( t ) ) can be used as the impact of malicious attacks for the detection of malicious players.

3.2. Detecting and Disconnecting Malicious Players

Due to the fact that a player’s decision x i , the function φ i ( x i ( t ) ) , and φ ˙ i ( x i ( t ) ) cannot directly interact with those of neighboring players, it is not possible, for any player i V , to determine whether the decision information of its neighbors converges to a fixed value based on the fixed-time stability algorithm. Consequently, we cannot identify which neighbors are uncontrollable. First, a virtual system is introduced as
$$\dot{\hat x}_i = A\hat x_i + B\tau_i, \qquad \hat y_i = C\hat x_i + D\dot\varphi_i(x_i(t)), \quad i \in V$$
where $\hat x_i \in \mathbb{R}^n$, $\hat y_i \in \mathbb{R}^m$, and $\tau_i \in \mathbb{R}^r$ denote the system state, output, and input, respectively, and their dimensions satisfy $n > r$ and $n > m$. $A$, $B$, $C$, $D$ are known constant system matrices of appropriate dimensions.
Remark 5.
In the virtual system (12), φ ˙ i ( x i ( t ) ) is appended to the output as an external variable, which affects the update of the system state based on the output feedback algorithm. Then, a consensus problem is defined for the detection of malicious players.
As discussed above, the consensus error for player i with its neighbor j is defined as x ^ i j = x ^ i x ^ j , i V , j N i . The distributed fixed-time observer relying on the relative output is introduced by referring to [35] to estimate the consensus error at the fixed time T 2 as
$$\dot w_{ij} = A_c w_{ij} + B_c(\hat y_i - \hat y_j), \qquad \xi_{ij} = D_c\big(w_{ij}(t) - \exp(A_cT_2)\,w_{ij}(t - T_2)\big), \qquad \bar x_{ij} = \xi_{ij} - E(\hat y_i - \hat y_j)$$
where $w_{ij}(t)$ and $\xi_{ij}(t)$ are auxiliary variables, with initial values set to 0 for $-T_2 \le t \le 0$, and $\bar x_{ij}$ is the estimate of $\hat x_{ij}$. The other matrices are defined as
$$A_c = \begin{bmatrix} GA - K_1C & 0 \\ 0 & GA - K_2C \end{bmatrix}, \qquad B_c = \begin{bmatrix} B_{c1} \\ B_{c2} \end{bmatrix}, \qquad C_c = \begin{bmatrix} I_n & -I_n \end{bmatrix},$$
and $D_c = \begin{bmatrix} I_n & 0 \end{bmatrix}\begin{bmatrix} C_c \\ C_c\exp(A_cT_2)\end{bmatrix}^{-1}$, where $G = I + EC$, $E = -B[(CB)^TCB]^{-1}(CB)^T$, and $K_i$, $i = 1, 2$, is an observer gain such that $GA - K_iC$ is stable. Additionally, $B_{ci} = K_i(I + CE) - GAE$. Invoking Theorem 1 in [35] and Lemma 1 in [28], we obtain the following lemma.
Lemma 5.
For the virtual system (12), if $\dot\varphi_i(x_i(t)) = 0$, $i \in V$, the observer designed as (13) can accurately estimate the consensus error at the fixed time $T_2$; that is, $\bar x_{ij} \equiv \hat x_{ij}$ for $t \ge T_2$. Moreover, if $\dot\varphi_i(x_i(t)) \ne 0$, the estimation error can be described as
$$\tilde x_{ij} = \hat x_{ij} - \bar x_{ij} = D_c\int_{t-T_2}^{t}\exp\big(A_c(t - s)\big)B_cD\big(\dot\varphi_i(x_i) - \dot\varphi_j(x_j)\big)\,ds + ED\big(\dot\varphi_i(x_i) - \dot\varphi_j(x_j)\big)$$
for $t \ge T_2$.
Lemma 5 introduces a method for detecting malicious players. Specifically, we can determine whether a player or its neighbors are malicious by evaluating whether the estimation error equals zero. Building on the results from Algorithm (7) and the fixed-time observer, the MPDD algorithm is summarized below.
Theorem 1.
Suppose that Assumptions 1 and 4 hold. All malicious players can be disconnected from normal players under the MPDD algorithm.
Proof. 
In Algorithm 1, both the system (8) and the observer (13) are convergent for $t \ge \max(T_1, T_2)$. By Lemma 4 and Remark 4, we have $\dot\varphi_i(x_i(t)) = 0$, $\forall i \in V_N$, and $\dot\varphi_i(x_i(t)) \ne 0$, $i \in V_M$. Combining this with Lemma 5, the conclusion is straightforward.    □
Remark 6.
Referring to the proof of Theorem 1 in [35], the output of the observer in (13) is independent of the input from the virtual system in (12) and depends solely on the relative output. In contrast to [28] and [35], the consensus error in this paper is defined as the state error between a player and its neighbor, rather than as the sum of state errors across all neighbors. The goal of the MPDD algorithm is to disconnect normal players from malicious players and eliminate the influence of malicious players on the evolution of normal players’ decisions. Notably, it is not necessary to know the identities of the malicious players. Therefore, there is no need to obtain information about two-hop neighbors (i.e., the neighbors of a player’s neighbors). Furthermore, Assumption 4 ensures that the communication graph among normal players remains connected after all malicious players are disconnected.
Algorithm 1 MPDD algorithm
1: For each player $i$ and its neighbors $j \in N_i$:
2: for $t > 0$ do
3:   Update players’ states and $\dot\varphi_i(x_i(t))$, $\dot\varphi_j(x_j(t))$ based on Algorithm (7);
4:   Compute the observer output $\bar x_{ij}$ and estimation error $\tilde x_{ij}$ based on (12) and (13);
5:   if $t \ge \max(T_1, T_2)$ then
6:     for each $j \in N_i$ do
7:       if $\tilde x_{ij} \ne 0$ then
8:         $a_{ij} = 0$;
9:         $N_i = N_i \setminus \{j\}$;
10:      end if
11:    end for
12:  end if
13: end for
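The disconnection step of Algorithm 1 can be sketched schematically on a hypothetical five-player ring (player 4 malicious; entirely our own toy example). The fixed-time observer residual of (13) is replaced here by a stand-in flag that is nonzero exactly when an endpoint of the edge is malicious, which is what Lemma 5 guarantees for $t \ge \max(T_1, T_2)$:

```python
import numpy as np

# 5-player undirected ring; player 4 is malicious (toy example).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], float)
malicious = {4}
N = A.shape[0]
neighbors = {i: set(np.flatnonzero(A[i])) for i in range(N)}

def estimation_error(i, j):
    # Stand-in for the observer residual x~_ij of (13): nonzero iff i or j is
    # malicious, since dphi/dt vanishes only for normal players after T1.
    return 1.0 if (i in malicious or j in malicious) else 0.0

# Disconnection step of Algorithm 1, executed once t >= max(T1, T2).
for i in range(N):
    for j in list(neighbors[i]):
        if estimation_error(i, j) != 0:
            A[i, j] = 0.0              # a_ij = 0
            neighbors[i].discard(j)    # N_i = N_i \ {j}

print(sorted(int(j) for j in neighbors[0]))   # player 4 no longer a neighbor
```

After pruning, the normal players 0–3 still form a connected path, illustrating the role of Assumption 4.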

3.3. Predefined-Time Convergence Algorithm

In this subsection, a distributed NE-seeking algorithm is developed to ensure that all players’ decisions converge to the NE at a predefined time. First, the definition of predefined-time convergence is given.
Definition 2.
The aggregative game (3) with the system (1) is said to achieve predefined-time convergence when the following conditions are satisfied for any initial value x i ( 0 ) , i V N :
$$\lim_{t\to t_f}\|x_i(t) - x_i^*\| \le \epsilon, \qquad \|x_i(t) - x_i^*\| \le \epsilon \ \ \forall t > t_f, \qquad \lim_{t\to\infty}\|x_i(t) - x_i^*\| = 0$$
where ϵ is an arbitrarily small positive constant and t f is the predefined convergence time designed to be independent of the initial value.
The TBG is defined as the following continuous, differentiable function:
$$\eta(t) = 0 \ \text{if } t = 0; \qquad \eta(t) = 1 \ \text{if } t \ge t_f; \qquad \dot\eta(t) = 0 \ \text{if } t = 0 \ \text{or } t \ge t_f; \qquad \dot\eta(t) > 0 \ \ \forall t \in (0, t_f).$$
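One concrete choice satisfying (15) — assumed here for illustration, since the paper does not fix a particular $\eta$ — is the cubic smoothstep:

```python
import numpy as np

t_f = 2.0   # predefined time (example value)

def eta(t):
    # cubic smoothstep TBG: eta(0) = 0, eta(t) = 1 for t >= t_f
    s = np.clip(t / t_f, 0.0, 1.0)
    return 3*s**2 - 2*s**3

def eta_dot(t):
    # derivative: zero at t = 0 and for t >= t_f, positive on (0, t_f)
    s = np.clip(t / t_f, 0.0, 1.0)
    return (6*s - 6*s**2) / t_f
```

All four TBG conditions can be checked directly: the endpoints are flat, the function saturates at 1 after $t_f$, and the derivative is strictly positive in between.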
Lemma 6
([36,37]). Consider a dynamic system of the following form:
$$\dot x(t) = -\varepsilon k(t)x(t), \qquad x(0) = x_0$$
where $\varepsilon$ is a positive parameter and $k(t) = \dot\eta(t)/(1 - \eta(t) + \kappa)$ with $0 < \kappa \ll 1$. $x(t)$ converges to $\big(\frac{\kappa}{1+\kappa}\big)^{\varepsilon}x_0$ at the predefined time $t_f$, irrespective of the initial value $x_0$.
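A quick numerical check of Lemma 6, using a cubic smoothstep as an assumed TBG (the lemma itself does not depend on the particular $\eta$): integrating the scalar system to $t_f$ reproduces the stated limit $(\kappa/(1+\kappa))^{\varepsilon}x_0$.

```python
import numpy as np

t_f, kappa, eps, x0 = 2.0, 0.01, 1.0, 1.0

def eta(t):
    s = np.clip(t / t_f, 0.0, 1.0)      # cubic smoothstep TBG (assumed choice)
    return 3*s**2 - 2*s**3

def eta_dot(t):
    s = np.clip(t / t_f, 0.0, 1.0)
    return (6*s - 6*s**2) / t_f

# forward-Euler integration of x_dot = -eps * k(t) * x up to t = t_f
dt, x, t = 1e-4, x0, 0.0
for _ in range(int(t_f / dt)):
    k = eta_dot(t) / (1 - eta(t) + kappa)
    x += dt * (-eps * k * x)
    t += dt

predicted = (kappa / (1 + kappa))**eps * x0   # limit stated in Lemma 6
print(x, predicted)
```

The closed-form solution is $x(t) = x_0\big(\frac{1-\eta(t)+\kappa}{1+\kappa}\big)^{\varepsilon}$, which makes the predefined-time contraction explicit: shrinking $\kappa$ drives $x(t_f)$ arbitrarily close to zero.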
For the game (3) of multi-agent systems described in (1), a predefined-time distributed NE-seeking algorithm is designed as
$$u_i = -(k(t)+3)b_1v_i - (k(t)+1)b_2G_i(x_i, s_i)$$
$$\dot s_i = -(k(t)+1)\Big[c\Big(\sum_{j=1}^Na_{ij}(s_i - s_j) + \sum_{j=1}^Na_{ij}(z_i - z_j)\Big) + \big(s_i - \varphi_i(x_i)\big)\Big]$$
$$\dot z_i = (k(t)+1)\sum_{j=1}^Na_{ij}(s_i - s_j)$$
where z i is an auxiliary variable, and b 1 , b 2 , and c are positive constants to be determined later. k ( t ) is the time-varying function parameter described in Lemma 6, and the selection of k ( t ) + 3 and k ( t ) + 1 is aimed at achieving a faster convergence rate and obtaining the predefined-time convergence condition in (33).
Define v = col ( v 1 , v 2 , , v N ) , s = col ( s 1 , s 2 , , s N ) , z = col ( z 1 , z 2 , , z N ) , and φ ( x ) = col ( φ 1 ( x 1 ) , φ 2 ( x 2 ) , , φ N ( x N ) ) . Substituting Algorithm (16) into system (1), we get the closed-loop system in compact form:
$$\dot x = v, \qquad \dot v = -b_1(k(t)+3)v - b_2(k(t)+1)G(x, s), \qquad \dot s = -(k(t)+1)\big(c(Ls + Lz) + (s - \varphi(x))\big), \qquad \dot z = (k(t)+1)Ls.$$
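As an illustration of (16)–(17), the following sketch simulates the closed-loop dynamics on a hypothetical four-player quadratic game (the cost functions, ring graph, gains $b_1$, $b_2$, $c$, and the cubic-smoothstep TBG are all our own choices, picked empirically rather than derived from the theorem's parameter conditions):

```python
import numpy as np

# Hypothetical quadratic aggregative game (our own example, not from the paper):
# J_i = (x_i - c_i)^2 + x_i * sigma(x), sigma(x) = mean(x), phi_i(x_i) = x_i,
# so G_i(x_i, s_i) = 2 (x_i - c_i) + s_i + x_i / N.
N, t_f, kappa = 4, 5.0, 0.01
cost_c = np.array([1.0, 2.0, 3.0, 4.0])

A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A                       # ring-graph Laplacian

# NE from the first-order conditions: (2 I + (I + 1 1^T)/N) x* = 2 c.
M = 2*np.eye(N) + (np.eye(N) + np.ones((N, N))) / N
xstar = np.linalg.solve(M, 2*cost_c)

b1, b2, c = 3.0, 6.0, 8.0                       # gains picked empirically

def k_of(t):
    # TBG gain k(t) = eta_dot / (1 - eta + kappa), cubic smoothstep (assumed)
    s = np.clip(t / t_f, 0.0, 1.0)
    eta, eta_dot = 3*s**2 - 2*s**3, (6*s - 6*s**2) / t_f
    return eta_dot / (1 - eta + kappa)

x = np.zeros(N); v = np.zeros(N); s = x.copy(); z = np.zeros(N)
dt, T, t = 5e-4, 6.0, 0.0
for _ in range(int(T / dt)):
    k = k_of(t)
    G = 2*(x - cost_c) + s + x / N              # stacked gradients G(x, s)
    dx = v
    dv = -b1*(k + 3)*v - b2*(k + 1)*G
    ds = -(k + 1)*(c*(L @ s + L @ z) + (s - x))  # phi(x) = x here
    dz = (k + 1)*(L @ s)
    x, v, s, z = x + dt*dx, v + dt*dv, s + dt*ds, z + dt*dz
    t += dt

err = np.linalg.norm(x - xstar)
print(err)
```

In this run the aggregate estimates $s_i$ settle to the true aggregate $\sigma(x)$ and the decisions land on the NE computed from the first-order conditions.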
Then, we obtain the following lemma regarding (17).
Lemma 7.
Under Assumptions 1–3, x * is the NE of the game (3) if and only if ( x * , v * , s * , z * ) is an equilibrium point of (17).
Proof. 
If ( x * , v * , s * , z * ) is an equilibrium point of (17), we have the following equations:
$$v^* = \mathbf{0}_N, \quad (18a)$$
$$-b_1(k(t)+3)v^* - b_2(k(t)+1)G(x^*, s^*) = \mathbf{0}_N, \quad (18b)$$
$$c\big(Ls^* + Lz^*\big) + \big(s^* - \varphi(x^*)\big) = \mathbf{0}_N, \quad (18c)$$
$$Ls^* = \mathbf{0}_N. \quad (18d)$$
From (18a) and (18b), we get $G(x^*, s^*) = \mathbf{0}_N$. Since the communication graph $G$ is undirected and connected (i.e., $L\mathbf{1}_N = \mathbf{0}_N$ and $\mathbf{1}_N^TL = \mathbf{0}_N^T$), it follows from (18d) that $s^* = \mathbf{1}_N\nu$ for some $\nu \in \mathbb{R}$. Substituting (18d) into (18c), we obtain
$$cLz^* + \big(s^* - \varphi(x^*)\big) = \mathbf{0}_N.$$
Left-multiplying both sides of the above equation by $\mathbf{1}_N^T$ and using $\mathbf{1}_N^TL = \mathbf{0}_N^T$, we have
$$\mathbf{1}_N^T\big(s^* - \varphi(x^*)\big) = 0.$$
Given that $s^*$ is a constant multiple of $\mathbf{1}_N$, it follows that
$$s_i^* = s_j^* = \frac{1}{N}\sum_{i=1}^N\varphi_i(x_i^*), \quad \forall i, j \in V.$$
Combining (18a), (18b), and (21) with Lemma 1, we conclude that x * is the NE of (3).
Conversely, if x * is the NE of the game (3), from Lemma 1, we have
$$G(x^*, s^*) = \mathbf{0}_N, \qquad s_i^* = s_j^* = \frac{1}{N}\sum_{i=1}^N\varphi_i(x_i^*), \quad \forall i, j \in V_N.$$
It is obvious that (18a), (18b), and (18d) hold. For any $\varsigma \in \mathbb{R}$, the vector $\zeta = \varsigma\mathbf{1}_N$ satisfies $(s^* - \varphi(x^*))^T\zeta = 0$. Under Assumption 1, $L\mathbf{1}_N = \mathbf{0}_N$, so $L\zeta = \mathbf{0}_N$. By the fundamental theorem of linear algebra [38], $\mathbb{R}^N$ can be orthogonally decomposed into $\ker(L)$ and $\mathrm{range}(L)$. Hence, $\zeta \in \ker(L)$ and $s^* - \varphi(x^*) \in \mathrm{range}(L)$, and there exists $z^* \in \mathbb{R}^N$ satisfying $cLz^* + (s^* - \varphi(x^*)) = \mathbf{0}_N$. In summary, $(x^*, v^*, s^*, z^*)$ is an equilibrium point of (17).  □
Remark 7.
According to Lemma 7, if x converges to the NE of the game (3), the equilibrium point of system (17) is stable. Conversely, if ( x , v , s , z ) converges to the equilibrium point of (17), it implies that x converges to the NE of the game (3).
Theorem 2.
Suppose that Assumptions 1–3 hold and that the parameters satisfy $b_1 > \max\big\{\frac{2(\mu+1)}{\omega}, 1\big\}$, $b_2 > \max\big\{\frac{2l^2}{\omega}, b_1\big\}$, and $c > \max\big\{\frac{b_2^2\mu^2(b_2\omega - 2l^2 + 2b_1)}{4b_1\lambda_2(L)(b_2\omega - 2l^2)} + 1, \frac{1}{\lambda_2(L)}\big\}$ for system (1) without malicious players. Then the aggregative game (3) can be solved by Algorithm (16) in a predefined time $t_f$; that is,
$$\lim_{t\to t_f}\|x(t) - x^*\| \le \sqrt{\frac{1}{b_1}\Big(\frac{\kappa}{1+\kappa}\Big)^{\frac{\lambda_{min}(\Theta)}{\lambda_{max}(\Phi)}}V(0)}, \qquad \|x(t) - x^*\| \le \sqrt{\frac{V(t_f)}{b_1}}\exp\Big(-\frac{\lambda_{min}(\Theta)}{2\lambda_{max}(\Phi)}(t - t_f)\Big), \ t \ge t_f, \qquad \lim_{t\to\infty}\|x - x^*\| = 0$$
where $0 < \kappa \ll 1$, $V(t)$ is the Lyapunov function defined in (25), and $\Theta$ and $\Phi$ are positive definite matrices defined below.
Proof. 
Define $\tilde x = x - x^*$, $\tilde v = v - v^*$, $\tilde s = s - s^*$, $\tilde z = z - z^*$, $h = G(x, s) - G(x^*, s^*)$, and $\tilde\varphi(x) = \varphi(x) - \varphi(x^*)$. From (17) and (18), one can obtain the following error system:
$$\dot{\tilde x} = \tilde v, \qquad \dot{\tilde v} = -b_1(k(t)+3)\tilde v - b_2(k(t)+1)h, \qquad \dot{\tilde s} = -(k(t)+1)\big(c(L\tilde s + L\tilde z) + (\tilde s - \tilde\varphi(x))\big), \qquad \dot{\tilde z} = (k(t)+1)L\tilde s.$$
Take the candidate Lyapunov function as
V(t) = \frac{1}{2}\|\tilde{x}+\tilde{v}\|^2 + b_1\|\tilde{x}\|^2 + \frac{1}{2}\|\tilde{s}+\tilde{z}\|^2 + \frac{2c-1}{2}\|\tilde{z}\|^2 = \chi^T \Phi \chi
where \Phi = \begin{pmatrix} \frac{1+2b_1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & \frac{1}{2} & c \end{pmatrix} and \chi = \mathrm{col}(\tilde{x}, \tilde{v}, \tilde{s}, \tilde{z}). Obviously, \Phi is positive definite, and we have b_1\|\tilde{x}\|^2 \le V(t) \le \lambda_{\max}(\Phi)\|\chi\|^2. The time derivative of V along (24) is
\dot{V}(t) = (1 - b_1(k(t)+3))\|\tilde{v}\|^2 + (1 - b_1(k(t)+1))\tilde{x}^T\tilde{v} - (k(t)+1) b_2 (\tilde{x}^T h + \tilde{v}^T h) + (k(t)+1)\big( -\|\tilde{s}\|^2 - (c-1)\tilde{s}^T L\tilde{s} - c\tilde{z}^T L\tilde{z} - \tilde{s}^T\tilde{z} + \tilde{s}^T\tilde{\varphi}(x) + \tilde{z}^T\tilde{\varphi}(x) \big).
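As a sanity check on the Lyapunov construction, the following sketch verifies numerically that \Phi is positive definite and that b_1\|\tilde{x}\|^2 \le V = \chi^T\Phi\chi \le \lambda_{\max}(\Phi)\|\chi\|^2, using the gains b_1 = 3 and c = 100 from the simulation section (scalar states, purely for illustration):

```python
import numpy as np

b1, c = 3.0, 100.0   # gains used later in the simulation section

Phi = np.array([[(1 + 2*b1)/2, 0.5, 0.0, 0.0],
                [0.5,          0.5, 0.0, 0.0],
                [0.0,          0.0, 0.5, 0.5],
                [0.0,          0.0, 0.5, c  ]])
assert np.all(np.linalg.eigvalsh(Phi) > 0)        # Phi is positive definite
lam_max = np.linalg.eigvalsh(Phi).max()

rng = np.random.default_rng(0)
for _ in range(1000):
    x, v, s, z = rng.standard_normal(4)           # scalar states, for illustration
    chi = np.array([x, v, s, z])
    V = 0.5*(x + v)**2 + b1*x**2 + 0.5*(s + z)**2 + (2*c - 1)/2 * z**2
    assert np.isclose(V, chi @ Phi @ chi)         # V = chi^T Phi chi
    assert b1*x**2 <= V + 1e-9                    # lower sandwich bound
    assert V <= lam_max * (chi @ chi) + 1e-9      # upper sandwich bound
print("bounds verified")
```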
Given the value ranges of b_1 and k(t), we obtain
(1 - b_1(k(t)+3))\|\tilde{v}\|^2 \le -b_1(k(t)+2)\|\tilde{v}\|^2,
(1 - b_1(k(t)+1))\tilde{x}^T\tilde{v} \le (b_1(k(t)+1) + 1)\|\tilde{x}\|\|\tilde{v}\|.
Note that G(x, s) is \omega-strongly monotone and \mu-Lipschitz continuous, so one can derive that
\tilde{x}^T h = \tilde{x}^T\big( G(x,s) - G(x,s^*) \big) + \tilde{x}^T\big( G(x,s^*) - G(x^*,s^*) \big) \ge \omega\|\tilde{x}\|^2 - \mu\|\tilde{x}\|\|\tilde{s}\|,
-\tilde{v}^T h = -\tilde{v}^T\big( G(x,s) - G(x,s^*) \big) - \tilde{v}^T\big( G(x,s^*) - G(x^*,s^*) \big) \le \mu\big( \|\tilde{v}\|\|\tilde{x}\| + \|\tilde{v}\|\|\tilde{s}\| \big).
By Lemma 2, we have the following inequalities:
-c\tilde{z}^T L\tilde{z} \le -c\lambda_2(L)\|\tilde{z}\|^2,
-\|\tilde{s}\|^2 - (c-1)\tilde{s}^T L\tilde{s} \le -\big( (c-1)\lambda_2(L) + 1 \big)\|\tilde{s}\|^2.
It follows from Young's inequality x^T y \le \frac{p}{2}\|x\|^2 + \frac{1}{2p}\|y\|^2 for p > 0, and from the fact that \varphi(x) is l-Lipschitz continuous, that
-\tilde{s}^T\tilde{z} \le \frac{1}{2}\|\tilde{s}\|^2 + \frac{1}{2}\|\tilde{z}\|^2,
\tilde{s}^T\tilde{\varphi}(x) \le \frac{1}{2}\|\tilde{s}\|^2 + \frac{l^2}{2}\|\tilde{x}\|^2,
\tilde{z}^T\tilde{\varphi}(x) \le \frac{1}{2}\|\tilde{z}\|^2 + \frac{l^2}{2}\|\tilde{x}\|^2.
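These three bounds can be spot-checked numerically; the map \varphi below is a hypothetical l-Lipschitz function used only for illustration:

```python
import numpy as np

l = 2.0                          # hypothetical Lipschitz constant (illustrative)
phi = lambda x: l * np.sin(x)    # l-Lipschitz map standing in for the aggregate map

rng = np.random.default_rng(1)
for _ in range(1000):
    s_t, z_t, x_t = rng.standard_normal(3) * 5      # s~, z~, x~ = x - x*
    phi_t = phi(x_t + 1.0) - phi(1.0)               # phi~(x), with x* = 1 (arbitrary)
    # Young's inequality with p = 1, plus |phi~(x)| <= l*|x~|:
    assert -s_t*z_t <= 0.5*s_t**2 + 0.5*z_t**2 + 1e-12
    assert  s_t*phi_t <= 0.5*s_t**2 + 0.5*l**2*x_t**2 + 1e-12
    assert  z_t*phi_t <= 0.5*z_t**2 + 0.5*l**2*x_t**2 + 1e-12
print("inequalities verified")
```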
Substituting the above inequalities into (26) yields
\dot{V}(t) \le -(k(t)+1)(b_2\omega - l^2)\|\tilde{x}\|^2 - (k(t)+1) b_1\|\tilde{v}\|^2 - b_1\|\tilde{v}\|^2 + \big[ (k(t)+1)(b_1 + b_2\mu) + 1 \big]\|\tilde{x}\|\|\tilde{v}\| - (k(t)+1)\big[ (c-1)\lambda_2(L)\|\tilde{s}\|^2 + (c\lambda_2(L) - 1)\|\tilde{z}\|^2 \big] + (k(t)+1) b_2\mu\big( \|\tilde{x}\|\|\tilde{s}\| + \|\tilde{v}\|\|\tilde{s}\| \big).
Since b_2 \ge b_1 > \max(2(\mu+1)/\omega, 1), we have (b_1 + b_2\mu) + 1 < b_2(\mu+1) + 1, and
Q_1 = \begin{pmatrix} -b_1 & \frac{(k(t)+1)b_2(\mu+1)+1}{2} \\ \frac{(k(t)+1)b_2(\mu+1)+1}{2} & -\frac{(k(t)+1)b_2\omega}{2} \end{pmatrix}
is negative definite. From (31), one can obtain that
\dot{V}(t) \le -(k(t)+1)\left[ \left( \frac{b_2\omega}{2} - l^2 \right)\|\tilde{x}\|^2 + b_1\|\tilde{v}\|^2 + (c-1)\lambda_2(L)\|\tilde{s}\|^2 + (c\lambda_2(L) - 1)\|\tilde{z}\|^2 - b_2\mu\big( \|\tilde{x}\|\|\tilde{s}\| + \|\tilde{v}\|\|\tilde{s}\| \big) \right] + \begin{pmatrix} \|\tilde{x}\| \\ \|\tilde{v}\| \end{pmatrix}^T Q_1 \begin{pmatrix} \|\tilde{x}\| \\ \|\tilde{v}\| \end{pmatrix} \le -(k(t)+1)\,\bar{\chi}^T \Theta \bar{\chi}
where \bar{\chi} = \mathrm{col}(\|\tilde{x}\|, \|\tilde{v}\|, \|\tilde{s}\|, \|\tilde{z}\|), \Theta = \mathrm{diag}(\Theta_1, \Theta_2), \Theta_2 = c\lambda_2(L) - 1, and \Theta_1 = \begin{pmatrix} \frac{b_2\omega}{2} - l^2 & 0 & -\frac{b_2\mu}{2} \\ 0 & b_1 & -\frac{b_2\mu}{2} \\ -\frac{b_2\mu}{2} & -\frac{b_2\mu}{2} & (c-1)\lambda_2(L) \end{pmatrix}. From the definitions of b_1, b_2, and c, \Theta is positive definite, and we have
\dot{V}(t) \le -(k(t)+1)\lambda_{\min}(\Theta)\|\chi\|^2 \le -k(t)\lambda_{\min}(\Theta)\|\chi\|^2 \le -\frac{\lambda_{\min}(\Theta)}{\lambda_{\max}(\Phi)}\, k(t)\, V(t)
where \lambda_{\min}(\Theta) denotes the smallest eigenvalue of \Theta. From Lemma 6 and the comparison principle, we can deduce that
\lim_{t \to t_f} V(t) \le \left( \frac{\kappa}{1+\kappa} \right)^{\lambda_{\min}(\Theta)/\lambda_{\max}(\Phi)} V(0).
Moreover, since b_1\|\tilde{x}\|^2 \le V(t), we can obtain
\lim_{t \to t_f} \| x(t) - x^* \| \le \sqrt{ \frac{1}{b_1} \left( \frac{\kappa}{1+\kappa} \right)^{\lambda_{\min}(\Theta)/\lambda_{\max}(\Phi)} V(0) }.
For t \ge t_f, k(t) = 0, so \dot{V}(t) \le -\lambda_{\min}(\Theta)\|\chi\|^2 \le -\frac{\lambda_{\min}(\Theta)}{\lambda_{\max}(\Phi)} V(t). It follows that
V(t) \le \exp\left( -\frac{\lambda_{\min}(\Theta)}{\lambda_{\max}(\Phi)}(t - t_f) \right) V(t_f), \qquad \| x(t) - x^* \| \le \sqrt{ \frac{1}{b_1} V(t_f) } \, \exp\left( -\frac{\lambda_{\min}(\Theta)}{2\lambda_{\max}(\Phi)}(t - t_f) \right).
Combined with Lemma 7, the above analysis implies that the decision variables x exponentially converge to the NE of the game (3).   □
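The positive definiteness of \Theta under the parameter conditions of Theorem 2 can be checked numerically via its eigenvalues. The constants \omega, \mu, l, and \lambda_2(L) below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical problem constants (placeholders, not from the paper):
omega, mu, l, lam2 = 1.0, 0.5, 1.0, 0.5   # monotonicity, Lipschitz constants, lambda_2(L)

# Gains chosen to satisfy the conditions of Theorem 2
b1 = 4.0   # > max(2*(mu+1)/omega, 1) = 3
b2 = 5.0   # > max(2*l**2/omega, b1) = 4
c_lb = max(b2**2 * mu**2 * (b2*omega - 2*l**2 + 2*b1)
           / (4*b1*lam2*(b2*omega - 2*l**2)) + 1, 1/lam2)
c = 4.0
assert c > c_lb

Theta = np.zeros((4, 4))
Theta[:3, :3] = [[b2*omega/2 - l**2, 0.0,       -b2*mu/2],
                 [0.0,               b1,        -b2*mu/2],
                 [-b2*mu/2,          -b2*mu/2,  (c - 1)*lam2]]   # Theta_1
Theta[3, 3] = c*lam2 - 1.0                                       # Theta_2
print(np.all(np.linalg.eigvalsh(Theta) > 0))   # True: Theta is positive definite
```

Lowering c below the computed bound makes the smallest eigenvalue cross zero, which is the Schur-complement condition the gain inequalities encode.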
Remark 8.
While resilient consensus methods like [27,28,29] detect malicious agents by leveraging state consensus objectives (i.e., lim t x i ( t ) x j ( t ) = 0 ), our work pioneers a fundamentally different approach tailored to non-consensus aggregative games, in which players optimize individual cost functions. Key innovations include the following:
(1) 
Problem Transformation: We introduce fixed-time stabilization to generate a detectable signal (specifically, the constancy of φ i ( x i ) for normal agents versus its variability for malicious agents), thereby enabling the adaptation of observer-based detection to game-theoretic settings that lack inherent consensus mechanisms.
(2) 
Time-Guaranteed Architecture: Unlike the asymptotic and finite-time detection methods in [27,28,29], our MPDD algorithm guarantees fixed-time isolation within T_{def} = \max(T_1, T_2), while the TBG-based NE seeking achieves predefined-time convergence to equilibrium, a capability absent in prior game-theoretic works.
(3) 
Unified Security-Game Framework: We unify malicious player mitigation and game-theoretic optimization into a single protocol with dual time guarantees, addressing security and performance objectives simultaneously. This framework diverges fundamentally from neighbor-value exclusion (e.g., MSR) or robust connectivity methods.

4. Numerical Example

In this section, a numerical example is given to verify the effectiveness of the proposed algorithms. A competition scenario in the electricity market involving six generation systems is considered, where the communication topology is modeled by a connected undirected graph, as depicted in Figure 2.
Each generation system has the following cost function:
J_i(x_i, x_{-i}) = f_i(x_i) - \big( p_0 - a_0 N \sigma(x) \big) x_i, \quad i \in \mathcal{V},
where x_i and f_i(x_i) are the output electrical power and the generation cost of system i, respectively. The generation cost is usually approximated by a quadratic function f_i(x_i) = \alpha_i + \beta_i x_i + \gamma_i x_i^2, where \alpha_i, \beta_i, and \gamma_i are cost coefficients. The term \sigma(x) = \frac{1}{N}\sum_{i=1}^{N} x_i is the aggregate function, and p_0 and a_0 are constants. Let p_0 = 200 and a_0 = 0.03; the other parameters are listed in Table 1.
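Setting \partial J_i / \partial x_i = 0 for all players gives a linear system whose solution reproduces the NE values reported later for the six-generator case; a sketch assuming this first-order-condition form:

```python
import numpy as np

# Table 1 cost coefficients and the price parameters p0, a0
beta  = np.array([11.669, 10.333, 10.833, 11.025, 10.667, 11.324])
gamma = np.array([0.00533, 0.00889, 0.00741, 0.00678, 0.00812, 0.00605])
p0, a0, N = 200.0, 0.03, 6

# FOC: beta_i + 2*gamma_i*x_i - p0 + a0*sum_j(x_j) + a0*x_i = 0
# (the extra a0*x_i term is player i's own influence on the price) => M x = p0 - beta
M = np.diag(2*gamma + a0) + a0*np.ones((N, N))
x_star = np.linalg.solve(M, p0 - beta)
print(np.round(x_star, 3))   # close to (897.619, 791.821, 832.958, 852.645, 810.969, 875.112)
```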
The system matrix of the virtual system is given by
A = \begin{pmatrix} 2.25 & 9 & 0 \\ 1 & 1 & 1 \\ 0 & 18 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
The observer gain matrices are chosen as
K_1 = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 17 \end{pmatrix}, \quad K_2 = \begin{pmatrix} 2 & 0 \\ 1 & 3 \\ 0 & 14 \end{pmatrix}.
The malicious systems, 4 and 6, are described as
\dot{x}_4 = v_4, \quad \dot{v}_4 = u_4 + g_4, \qquad \dot{x}_6 = v_6, \quad \dot{v}_6 = u_6 + g_6,
where g 4 and g 6 represent the effects of malicious attacks, defined as g 4 = 10 ( sin ( x 4 + 5 ) + x 6 sin ( x 4 ) + sin ( t ) + d ( t ) ) and g 6 = 10 ( cos ( 10 x 6 ) + 0.01 x 6 + d ( t ) ) , with d ( t ) denoting white noise with power 1.
For Equation (7), choose parameters \alpha = 1, \beta = 2.25, p = 0.5, and q = 2 such that T_1 \le T_{max} = 3 s. For the distributed fixed-time observer, let the time delay T_2 = 4 s. The simulation results corresponding to Lemmas 4 and 5 are shown in Figure 3 and Figure 4. As seen in Figure 3b, the velocity state of each normal system converges to 0 within the fixed time t = 3 s, whereas the velocity states of the malicious systems do not converge due to the malicious attacks. Figure 4 shows the estimation errors \hat{x}_{ij} = [\hat{x}_{ij}^1, \hat{x}_{ij}^2, \hat{x}_{ij}^3]^T \in \mathbb{R}^3, i \in \mathcal{V}, j \in \mathcal{N}_i. For t \ge 4 s, we observe that the following errors do not vanish: \hat{x}_{1j} for j \in \{4, 6\}; \hat{x}_{26}; \hat{x}_{34}; \hat{x}_{4j} for j \in \{1, 3, 5\}; \hat{x}_{5j} for j \in \{4, 6\}; and \hat{x}_{6j} for j \in \{1, 2, 5\}. Therefore, it can be deduced that systems 4 and 6, the common endpoints of all non-vanishing error links, are malicious. To isolate the malicious influence, each normal system simply disconnects the neighbors associated with non-vanishing estimation errors, i.e., systems 4 and 6.
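The fixed-time velocity-nulling behavior can be illustrated with a scalar double-power system \dot{v} = -\alpha\,\mathrm{sig}(v)^p - \beta\,\mathrm{sig}(v)^q using the stated parameters; this is a generic sketch of the mechanism, not necessarily the exact protocol of Equation (7):

```python
import numpy as np

alpha, beta, p, q = 1.0, 2.25, 0.5, 2.0
sig = lambda v, a: np.sign(v) * np.abs(v)**a   # signed power function

def terminal_speed(v0, dt=1e-4, T=3.0):
    """Euler-integrate v' = -alpha*sig(v)^p - beta*sig(v)^q and return |v(T)|."""
    v = v0
    for _ in range(int(T/dt)):
        v += dt * (-alpha*sig(v, p) - beta*sig(v, q))
    return abs(v)

# A standard double-power settling-time bound gives
# T_settle <= 1/(alpha*(1-p)) + 1/(beta*(q-1)) = 2 + 4/9 < 3 s, for any v0
for v0 in (0.5, 10.0, 500.0):
    print(terminal_speed(v0) < 1e-2)   # True for each v0
```

The key property is that the settling-time bound is independent of the initial condition, which is what makes a uniform detection deadline T_1 possible.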
Assume that all malicious systems have been disconnected from the normal systems for t \ge 5 s. The predefined-time distributed NE-seeking algorithm is then implemented. Let the initialization time be t_0 = 5 s and the predefined time be t_f = 10 s. The TBG is designed as
\eta(t) = \begin{cases} 0, & 0 \le t < t_0 \\ \dfrac{10(t-t_0)^6}{5^6} - \dfrac{24(t-t_0)^5}{5^5} + \dfrac{15(t-t_0)^4}{5^4}, & t_0 \le t \le t_f \\ 1, & t > t_f. \end{cases}
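The boundary properties of this TBG, and the gain integral that produces the (\kappa/(1+\kappa))^{\lambda_{\min}(\Theta)/\lambda_{\max}(\Phi)} factor in Theorem 2, can be checked numerically. The gain form k(t) = \dot{\eta}(t)/(1 - \eta(t) + \kappa) is assumed here, as is common in TBG schemes (the paper's (16) may differ); \kappa = 0.1 is used for a well-conditioned quadrature rather than the simulation's 10^{-3}:

```python
import numpy as np

t0, tf, kappa = 5.0, 10.0, 0.1   # kappa = 0.1 for a well-conditioned quadrature
T = tf - t0

eta  = lambda t: 10*(t - t0)**6/T**6 - 24*(t - t0)**5/T**5 + 15*(t - t0)**4/T**4
deta = lambda t: 60*(t - t0)**5/T**6 - 120*(t - t0)**4/T**5 + 60*(t - t0)**3/T**4

# TBG boundary conditions: eta(t0) = 0, eta(tf) = 1, eta'(t0) = eta'(tf) = 0
assert np.isclose(eta(t0), 0) and np.isclose(eta(tf), 1)
assert np.isclose(deta(t0), 0) and np.isclose(deta(tf), 0)

# Assumed gain k(t) = eta'(t)/(1 - eta(t) + kappa): its integral over [t0, tf]
# equals ln((1+kappa)/kappa), yielding the (kappa/(1+kappa))^rho contraction factor
t = np.linspace(t0, tf, 200001)
k = deta(t) / (1 - eta(t) + kappa)
integral = np.sum(0.5*(k[1:] + k[:-1]) * np.diff(t))   # trapezoidal rule
print(abs(integral - np.log((1 + kappa)/kappa)) < 1e-4)   # True
```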
Before the malicious systems are disconnected, the NE solution of the six systems can be calculated analytically as x^* = \mathrm{col}(897.619, 791.821, 832.958, 852.645, 810.969, 875.112). After the malicious systems are disconnected, i.e., for t \ge 5 s, the new NE solution for the remaining four systems is calculated as x^* = \mathrm{col}(906.585, 799.451, 841.092, 818.853). Let \kappa = 10^{-3}, b_1 = 3, b_2 = 200, and c = 100. The evolution of the generation system outputs is shown in Figure 5. As illustrated in Figure 5a, the outputs of all systems cannot converge to the NE due to the presence of malicious systems. Figure 5b shows that, under the MPDD algorithm and (16), the outputs of the normal systems converge to the new NE at the predefined time t_f = 10 s. For comparison, Figure 5c displays the outputs under Equation (16) without the TBG, i.e., with k(t) = 0, where the convergence time exceeds the predefined time. In summary, the simulation results verify the effectiveness of the proposed algorithms.

5. Conclusions

This paper investigated a predefined-time distributed aggregative game for second-order MASs with malicious players. A novel malicious player detection and disconnection algorithm was proposed, incorporating a fixed-time stabilization technique and a distributed fixed-time observer, which ensures that all malicious players are disconnected from normal players within a predefined time. Subsequently, to ensure that the decisions of normal players converge to the new NE, a predefined-time distributed NE-seeking algorithm was introduced, combining the TBG scheme with a state-feedback approach. The convergence properties of the proposed method were analyzed using Lyapunov stability theory. Numerical simulations demonstrated the effectiveness of the overall framework.
Inspired by [30,31,39], integrating event-triggered schemes (ETSs) into our framework is a critical next step. Potential enhancements include (i) a deception-aware ETS for the MPDD observer to reduce communication overhead during detection; (ii) dynamic triggering rules aligned with the TBG’s time-varying gain to minimize gradient exchanges; and (iii) co-design of security thresholds and triggering conditions to balance resilience with resource efficiency. Current limitations include the following: (1) the proposed method assumes a static communication topology and thus requires further extension to dynamic networks with mobile agents; and (2) it relies on global knowledge of monotonicity and Lipschitz constants for parameter tuning, limiting its plug-and-play applicability in unknown environments. Future work will address these issues by developing more flexible communication schemes and designing fully distributed Nash equilibrium-seeking algorithms that do not rely on global information.

Author Contributions

Conceptualization, X.H. and Z.Z.; methodology, X.H.; software, H.F.; validation, X.H. and Z.C.; investigation, Z.Z.; data curation, H.F.; writing—original draft preparation, X.H.; writing—review and editing, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants U21A20518 and U23A20341, and in part by the Hunan Key Research and Development Program under Grant 2024JK2057.

Data Availability Statement

All data generated or analyzed during this study are included in this article.

Conflicts of Interest

Author Zhengchao Zeng and Haolong Fu were employed by the company Hunan Vanguard Group Corporation Limited. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Liu, Z.; Wu, Q.; Huang, S.; Wang, L.; Shahidehpour, M.; Xue, Y. Optimal Day-Ahead Charging Scheduling of Electric Vehicles Through an Aggregative Game Model. IEEE Trans. Smart Grid 2018, 9, 5173–5184.
  2. Chen, Y.; Yi, P. Multi-Cluster Aggregative Games: A Linearly Convergent Nash Equilibrium Seeking Algorithm and Its Applications in Energy Management. IEEE Trans. Netw. Sci. Eng. 2024, 11, 2797–2809.
  3. Wu, J.; Chen, Q.; Jiang, H.; Wang, H.; Xie, Y.; Xu, W.; Zhou, P.; Xu, Z.; Chen, L.; Li, B.; et al. Joint Power and Coverage Control of Massive UAVs in Post-Disaster Emergency Networks: An Aggregative Game-Theoretic Learning Approach. IEEE Trans. Netw. Sci. Eng. 2024, 11, 3782–3799.
  4. Grammatico, S.; Parise, F.; Colombino, M.; Lygeros, J. Decentralized Convergence to Nash Equilibria in Constrained Deterministic Mean Field Control. IEEE Trans. Autom. Control 2016, 61, 3315–3329.
  5. Belgioioso, G.; Grammatico, S. Semi-Decentralized Generalized Nash Equilibrium Seeking in Monotone Aggregative Games. IEEE Trans. Autom. Control 2023, 68, 140–155.
  6. Zhu, Y.; Yu, W.; Wen, G.; Chen, G. Distributed Nash Equilibrium Seeking in an Aggregative Game on a Directed Graph. IEEE Trans. Autom. Control 2021, 66, 2746–2753.
  7. Liang, S.; Yi, P.; Hong, Y. Distributed Nash equilibrium seeking for aggregative games with coupled constraints. Automatica 2017, 85, 179–185.
  8. Gadjov, D.; Pavel, L. Single-Timescale Distributed GNE Seeking for Aggregative Games Over Networks via Forward-Backward Operator Splitting. IEEE Trans. Autom. Control 2021, 66, 3259–3266.
  9. Liu, J.; Liao, X.; Dong, J.S.; Mansoori, A. Continuous-Time Distributed Generalized Nash Equilibrium Seeking in Nonsmooth Fuzzy Aggregative Games. IEEE Trans. Control Netw. Syst. 2024, 11, 1262–1274.
  10. Liang, J.; Miao, H.; Li, K.; Tan, J.; Wang, X.; Luo, R.; Jiang, Y. A Review of Multi-Agent Reinforcement Learning Algorithms. Electronics 2025, 14, 820.
  11. Zeng, W.; Yan, X.; Mo, F.; Zhang, Z.; Li, S.; Wang, P.; Wang, C. Knowledge-Enhanced Deep Reinforcement Learning for Multi-Agent Game. Electronics 2025, 14, 1347.
  12. Deng, Z. Distributed Nash equilibrium seeking for aggregative games with second-order nonlinear players. Automatica 2022, 135, 109980.
  13. Zhang, L.; Guo, G. Distributed Optimization for Aggregative Games Based on Euler-Lagrange Systems With Large Delay Constraints. IEEE Access 2020, 8, 179272–179280.
  14. Huang, Y.; Meng, Z.; Sun, J. Distributed Nash Equilibrium Seeking for Multicluster Aggregative Game of Euler–Lagrange Systems With Coupled Constraints. IEEE Trans. Cybern. 2024, 54, 5672–5683.
  15. Cai, X.; Xiao, F.; Wei, B.; Yu, M.; Fang, F. Nash Equilibrium Seeking for General Linear Systems With Disturbance Rejection. IEEE Trans. Cybern. 2023, 53, 5240–5249.
  16. Zhang, Y.; Liang, S.; Wang, X.; Ji, H. Distributed Nash Equilibrium Seeking for Aggregative Games With Nonlinear Dynamics Under External Disturbances. IEEE Trans. Cybern. 2020, 50, 4876–4885.
  17. Huang, B.; Zou, Y.; Meng, Z. Distributed-Observer-Based Nash Equilibrium Seeking Algorithm for Quadratic Games With Nonlinear Dynamics. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 7260–7268.
  18. Ye, M. Distributed Nash Equilibrium Seeking for Games in Systems With Bounded Control Inputs. IEEE Trans. Autom. Control 2021, 66, 3833–3839.
  19. Shi, X.; Su, Y.; Huang, D.; Sun, C. Distributed Aggregative Game for Multi-Agent Systems With Heterogeneous Integrator Dynamics. IEEE Trans. Circuits Syst. II Express Br. 2024, 71, 2169–2173.
  20. Ai, X.; Wang, L. Distributed adaptive Nash equilibrium seeking and disturbance rejection for noncooperative games of high-order nonlinear systems with input saturation and input delay. Int. J. Robust Nonlinear Control 2021, 31, 2827–2846.
  21. Tan, S.; Wang, Y.; Vasilakos, A.V. Distributed Population Dynamics for Searching Generalized Nash Equilibria of Population Games With Graphical Strategy Interactions. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 3263–3272.
  22. Li, S.; Nian, X.; Deng, Z. Distributed resource allocation of second-order multiagent systems with exogenous disturbances. Int. J. Robust Nonlinear Control 2020, 30, 1298–1310.
  23. Huang, B.; Zou, Y.; Meng, Z.; Ren, W. Distributed Time-Varying Convex Optimization for a Class of Nonlinear Multiagent Systems. IEEE Trans. Autom. Control 2020, 65, 801–808.
  24. Liu, H.; Cheng, H.; Zhang, Y. Event-Triggered Discrete-Time ZNN Algorithm for Distributed Optimization with Time-Varying Objective Functions. Electronics 2025, 14, 1359.
  25. Du, Y.; Wang, Y.; Zuo, Z.; Zhang, W. Stochastic bipartite consensus with measurement noises and antagonistic information. J. Frankl. Inst. 2021, 358, 7761–7785.
  26. Zhai, Y.; Liu, Z.W.; Guan, Z.H.; Wen, G. Resilient Consensus of Multi-Agent Systems With Switching Topologies: A Trusted-Region-Based Sliding-Window Weighted Approach. IEEE Trans. Circuits Syst. II Express Br. 2021, 68, 2448–2452.
  27. Zhai, Y.; Liu, Z.W.; Guan, Z.H.; Gao, Z. Resilient Delayed Impulsive Control for Consensus of Multiagent Networks Subject to Malicious Agents. IEEE Trans. Cybern. 2022, 52, 7196–7205.
  28. Zhao, D.; Lv, Y.; Yu, X.; Wen, G.; Chen, G. Resilient Consensus of Higher Order Multiagent Networks: An Attack Isolation-Based Approach. IEEE Trans. Autom. Control 2022, 67, 1001–1007.
  29. Zhou, J.; Lv, Y.; Wen, G.; Yu, X. Resilient Consensus of Multiagent Systems Under Malicious Attacks: Appointed-Time Observer-Based Approach. IEEE Trans. Cybern. 2022, 52, 10187–10199.
  30. Ge, X.; Han, Q.; Zhang, X.; Ding, D.; Ning, B. Distributed coordination control of multi-agent systems under intermittent sampling and communication: A comprehensive survey. Sci. China Inf. Sci. 2025, 68, 151201.
  31. Zhang, X.; Han, Q.; Ge, X.; Ding, D.; Ning, B.; Zhang, B. An overview of recent advances in event-triggered control. Sci. China Inf. Sci. 2025, 68, 161201.
  32. Belgioioso, G.; Nedić, A.; Grammatico, S. Distributed Generalized Nash Equilibrium Seeking in Aggregative Games on Time-Varying Networks. IEEE Trans. Autom. Control 2021, 66, 2061–2075.
  33. Godsil, C.; Royle, G. Algebraic Graph Theory; Springer: New York, NY, USA, 2001.
  34. Polyakov, A. Nonlinear Feedback Design for Fixed-Time Stabilization of Linear Control Systems. IEEE Trans. Autom. Control 2012, 57, 2106–2110.
  35. Lv, Y.; Wen, G.; Huang, T. Adaptive Protocol Design For Distributed Tracking With Relative Output Information: A Distributed Fixed-Time Observer Approach. IEEE Trans. Control Netw. Syst. 2020, 7, 118–128.
  36. Guo, Z.; Chen, G. Predefined-Time Distributed Optimal Allocation of Resources: A Time-Base Generator Scheme. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 438–447.
  37. Li, S.; Nian, X.; Deng, Z.; Chen, Z. Predefined-time distributed optimization of general linear multi-agent systems. Inf. Sci. 2022, 584, 111–125.
  38. Strang, G. The fundamental theorem of linear algebra. Am. Math. Mon. 1993, 100, 848–855.
  39. Kazemy, A.; Lam, J.; Zhang, X.M. Event-Triggered Output Feedback Synchronization of Master–Slave Neural Networks Under Deception Attacks. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 952–961.
Figure 1. Execution process of NE-seeking strategy with malicious players.
Figure 2. Communication graph. The green nodes represent normal systems, and the red nodes represent malicious systems.
Figure 3. (a) Influence of malicious attacks. (b) Evolution of velocity states under Equation (7).
Figure 4. Estimation errors based on the observer (13) and Algorithm 1, where T_1 \le 3 s and T_2 = 4 s. (a) Estimation error by Agent 1; (b) estimation error by Agent 2; (c) estimation error by Agent 3; (d) estimation error by Agent 4; (e) estimation error by Agent 5; (f) estimation error by Agent 6.
Figure 5. (a) Evolution of generation system outputs under Equation (16) with malicious systems. (b) Evolution of generation system outputs under the MPDD algorithm (16). (c) Evolution of generation system outputs under the MPDD algorithm (16) without considering the TBG.
Table 1. System parameters.
Generator   \alpha_i   \beta_i   \gamma_i   x_i(0)   v_i(0)   s_i(0)
#1          213        11.669    0.00533    1000     50       560
#2          200        10.333    0.00889    950      100      720
#3          240        10.833    0.00741    900      60       650
#4          230        11.025    0.00678    850      80       460
#5          225        10.667    0.00812    800      40       610
#6          234        11.324    0.00605    750      75       760
He, X.; Zeng, Z.; Fu, H.; Chen, Z. A Fixed-Time Convergence Method for Solving Aggregative Games with Malicious Players. Electronics 2025, 14, 2998. https://doi.org/10.3390/electronics14152998
