Article

Alternating Inertial and Overrelaxed Algorithms for Distributed Generalized Nash Equilibrium Seeking in Multi-Player Games

1 Jiangsu Provincial Key Laboratory of Networked Collective Intelligence, School of Mathematics, Southeast University, Nanjing 211189, China
2 Yonsei Frontier Lab, Yonsei University, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Fractal Fract. 2021, 5(3), 62; https://doi.org/10.3390/fractalfract5030062
Submission received: 16 May 2021 / Revised: 19 June 2021 / Accepted: 23 June 2021 / Published: 28 June 2021
(This article belongs to the Special Issue Frontiers in Fractional-Order Neural Networks)

Abstract: This paper investigates the distributed computation of the generalized Nash equilibrium (GNE) in a multi-player game with shared coupling constraints. Two kinds of relatively fast distributed algorithms are constructed with alternating inertia and alternating overrelaxation in the partial-decision information setting. We prove their convergence to the GNE with fixed step-sizes by resorting to the operator splitting technique, under a Lipschitz continuity assumption on the extended pseudo-gradient mapping. Finally, a numerical simulation is given to illustrate the efficiency and performance of the algorithms.

1. Introduction

Game theory is the study of mathematical models for describing competition and cooperation interaction among intelligent rational decision-makers [1]. In the past few years, networked games have received increasing attention due to their wide applications in different areas such as competitive economy [2], power allocation in interference channel models [3,4], environmental pollution control [5], cloud computing [6], wireless communication [7,8,9], and adversarial classification [10,11].
The Nash equilibrium (NE) is a set of strategies where each player’s choice is its best response to the choices of the other players of the game [12]. An NE in games with shared coupling constraints is referred to as generalized Nash equilibrium (GNE) [13]. In order to compute the GNE, a great number of algorithms have been proposed [14,15,16,17,18], most of which depend on full-decision information, i.e., each player is assumed to have full access to all of the other players’ actions. However, such an assumption could be impractical in large-scale distributed networks [19,20]. To overcome this shortcoming, fully distributed algorithms under the partial-decision information setting have recently become a research topic that attracts recurring interest.
Under the partial-decision information setting, each player can communicate only with its neighbors (instead of all its opponents) via a communication graph. In this case, a player has no direct access to some of the decision information entering its cost function. To make up for the missing information, the player estimates the other players' actions and exchanges these estimates with its neighbors. Such estimates tend to the real actions of the players under an appropriately designed consensus protocol [21]. So far, some efforts have been devoted to the GNE seeking problem with partial-decision information. For example, an adaptation of a fictitious play algorithm for large-scale games is introduced in [22], and information exchange techniques for aggregative games are studied in [23]. An operator-theoretic approach has been introduced to analyze GNE problems [16,21], under which the problem is cast as finding a zero of a sum of monotone operators through primal-dual analysis, and convergence is shown by reformulating the iteration as a forward–backward fixed-point iteration.
Compared with existing distributed algorithms with diminishing step-sizes [24], algorithms with fixed step-sizes have the potential to exhibit faster convergence [16]. Very recently, some distributed proximal algorithms and projected-gradient algorithms have been proposed for seeking the GNE with fixed step-sizes [16,25,26,27,28]. It is worth noting that most of the existing algorithms under the partial-decision information setting require that the extended pseudo-gradient mapping, defined in the augmented space of actions and estimates, be strictly/strongly monotone. Such an assumption seems strong, and how to relax it is a technical difficulty. In this paper, we investigate GNE seeking algorithms under a mild assumption on the extended pseudo-gradient mapping, as in [21].
In addition, some refined GNE seeking algorithms with inertia and relaxation have been proposed in ([16], [Alg. 6.1]), ([29], [Alg. 2]) and ([25], [Alg. 3]) to accelerate the convergence to the GNE. Although the fast convergence of the mentioned algorithms has been validated numerically, more computation resources are inevitably required at each iteration, and computation resources can be limited and expensive in many situations. Inspired by the above discussion, in this paper we combine a projection-based algorithm via the doubly augmented operator splitting of [21] with the inertia/overrelaxation idea of [25]. Specifically, we design distributed GNE seeking algorithms that balance the convergence rate and the computation consumption in games with shared coupling constraints under a partial-decision information setting. Two kinds of fully distributed algorithms, i.e., alternating inertial algorithms and alternating overrelaxed algorithms, are proposed with fixed step-sizes. Their convergence to the GNE is guaranteed under a mild assumption on the extended pseudo-gradient mapping, compared to [26], by using the Karush–Kuhn–Tucker (KKT) conditions of an optimization problem and a variational inequality reformulation. Finally, a numerical example is provided to show the effectiveness of our algorithms, which are validated numerically to have a relatively fast convergence rate.
The remainder of the paper is organized as follows. In Section 2, we introduce some notations and background theory. Section 3 describes the problem of interest, formulates it as a mathematical model, and rewrites the game as the problem of finding a solution of a variational inequality (VI). In Section 4, we propose two alternating fully distributed GNE seeking algorithms under a partial-decision information setting, together with assumptions that guarantee convergence; the convergence analysis is also presented in this section. We present numerical results in Section 5 and conclude in Section 6.
Notations: Let $\mathbb{R}^m$ ($\mathbb{R}^m_+$) represent the $m$-dimensional (non-negative) Euclidean space. $\mathbf{0}_n \in \mathbb{R}^n$ is the $n$-dimensional vector with all elements equal to 0, and $I_m \in \mathbb{R}^{m\times m}$ is the $m\times m$ identity matrix. $\mathbf{1}_N$ denotes the $N$-dimensional column vector with all elements equal to 1. We denote by $\Omega_1\times\cdots\times\Omega_n$ or $\prod_{i=1}^n \Omega_i$ the Cartesian product of the sets $\Omega_i$, $i = 1,\dots,n$. For given column vectors $x_1,\dots,x_n$, $\mathrm{col}(x_1,\dots,x_n) = [x_1^\top,\dots,x_n^\top]^\top$. Let $[x]_k$ denote the $k$-th element of a column vector $x$, let $\langle x, y\rangle = x^\top y$ denote the inner product of $x$ and $y$, and let $\|x\| = \sqrt{x^\top x}$ denote the norm induced by $\langle\cdot,\cdot\rangle$. $\Phi \succ 0$ stands for a symmetric positive definite matrix. Similarly, the $\Phi$-induced inner product is $\langle x, y\rangle_\Phi = \langle\Phi x, y\rangle$, and the $\Phi$-induced norm is $\|x\|_\Phi = \sqrt{\langle\Phi x, x\rangle}$. $\otimes$ is the Kronecker product, and $\mathrm{diag}(A_1,\dots,A_N)$ denotes the block diagonal matrix with $A_1,\dots,A_N$ on its diagonal. For $A \in \mathbb{R}^{m\times n}$, $\|A\| = \max\{\sum_{k=1}^n |[A]_{1k}|,\dots,\sum_{k=1}^n |[A]_{mk}|\}$, where $[A]_{jk}$ denotes the element of $A$ in the $j$-th row and $k$-th column.

2. Preliminaries

2.1. Operator Theory

The following concepts are reviewed from [30]. Let $A: \mathbb{R}^m \to 2^{\mathbb{R}^m}$ be a set-valued operator. Denote by $\mathrm{Id}$ the identity operator, i.e., $\mathrm{Id}(x) = x$. The graph of $A$ is $\mathrm{gra}\,A = \{(x,u)\in\mathbb{R}^m\times\mathbb{R}^m \mid u \in Ax\}$, and the zero set of $A$ is $\mathrm{zer}\,A = \{x\in\mathbb{R}^m \mid 0 \in Ax\}$. The resolvent of $A$ is $R_A = (\mathrm{Id} + A)^{-1}$. An operator $A$ is called monotone if, for all $(x,u), (y,v)\in\mathrm{gra}\,A$, $\langle x - y, u - v\rangle \ge 0$. Moreover, it is maximally monotone if $\mathrm{gra}\,A$ is not strictly contained in the graph of any other monotone operator, i.e., for every $(x,u)\in\mathbb{R}^m\times\mathbb{R}^m$, $(x,u)\in\mathrm{gra}\,A \Leftrightarrow \langle x - y, u - v\rangle \ge 0$ for all $(y,v)\in\mathrm{gra}\,A$. $A$ is nonexpansive if it is Lipschitz continuous with constant 1, i.e., $\|Ax - Ay\| \le \|x - y\|$ for all $x, y\in\mathbb{R}^m$, and is firmly nonexpansive if $\|Ax - Ay\|^2 + \|(\mathrm{Id}-A)x - (\mathrm{Id}-A)y\|^2 \le \|x - y\|^2$. The operator $A$ is $\alpha$-averaged with constant $\alpha\in(0,1)$, denoted $A\in\mathcal{A}(\alpha)$, if for all $x, y\in\mathbb{R}^m$, $\|Ax - Ay\|^2 \le \|x - y\|^2 - \frac{1-\alpha}{\alpha}\|(\mathrm{Id}-A)x - (\mathrm{Id}-A)y\|^2$. It follows easily that if $A$ is averaged then it is nonexpansive, and that $A$ is firmly nonexpansive if and only if it is $1/2$-averaged. $A$ is $\beta$-cocoercive for $\beta > 0$ if, for all $x, y\in\mathbb{R}^m$, $\beta\|Ax - Ay\|^2 \le \langle x - y, Ax - Ay\rangle$. The normal cone operator of a set $\Omega$ is defined as
$$N_\Omega(x) = \begin{cases} \varnothing, & x \notin \Omega, \\ \{v \mid \langle v, y - x\rangle \le 0,\ \forall y\in\Omega\}, & x \in \mathrm{bd}(\Omega), \\ \{0\}, & x \in \mathrm{int}(\Omega). \end{cases}$$
Let the projection of $x$ onto $\Omega$ be $P_\Omega(x) = \arg\min_{y\in\Omega}\|x - y\|$; then $P_\Omega(x) = R_{N_\Omega}(x) = (\mathrm{Id} + N_\Omega)^{-1}(x)$.
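For a box-shaped set the resolvent of the normal cone reduces to coordinate-wise clipping. The following minimal sketch (Python/NumPy; the box bounds are purely illustrative and not taken from the paper) shows this identity numerically.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {y : lo <= y <= hi}.

    For Omega = [lo, hi], P_Omega = (Id + N_Omega)^{-1}, i.e. the
    resolvent of the normal cone operator is just clipping.
    """
    return np.clip(x, lo, hi)

# Example: project a point onto the unit box [0, 1]^3.
x = np.array([1.4, -0.2, 0.7])
print(project_box(x, 0.0, 1.0))   # -> [1.  0.  0.7]
```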

2.2. Graph Theory

Let the graph $\mathcal{G} = (\mathcal{N}, \mathcal{E})$ describe the information exchange among the players, where $\mathcal{N} := \{1,\dots,N\}$ is the set of players and $\mathcal{E} \subseteq \mathcal{N}\times\mathcal{N}$ is the edge set. If player $i$ can obtain information from player $j$, then $(i,j)\in\mathcal{E}$ and $j$ belongs to player $i$'s neighbor set $\mathcal{N}_i := \{j \mid (i,j)\in\mathcal{E}\}$. $\mathcal{G}$ is said to be undirected when $(i,j)\in\mathcal{E}$ if and only if $(j,i)\in\mathcal{E}$. Let $W := [w_{ij}]\in\mathbb{R}^{N\times N}$ be the weighted adjacency matrix of $\mathcal{G}$ with $w_{ij} > 0$ if $j\in\mathcal{N}_i$ and $w_{ij} = 0$ otherwise, and assume that $W = W^\top$. The degree matrix is defined as $Deg := \mathrm{diag}(d_1,\dots,d_N) = \mathrm{diag}(\sum_{j=1}^N w_{1j},\dots,\sum_{j=1}^N w_{Nj})$, and the weighted Laplacian of $\mathcal{G}$ is $L := Deg - W$. If $\mathcal{G}$ is connected and undirected, then 0 is a simple eigenvalue of $L$ and the remaining eigenvalues of $L$, in ascending order, satisfy $0 < s_2(L) \le \dots \le s_N(L)$.
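As a quick numerical illustration of these graph quantities, the following sketch (Python/NumPy; the 4-node ring and unit weights are arbitrary example data, not the graph used later in the paper) builds $W$, $Deg$ and $L$ and reports the algebraic connectivity $s_2(L)$.

```python
import numpy as np

# Symmetric weighted adjacency matrix of a 4-node ring (example data).
W = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

Deg = np.diag(W.sum(axis=1))        # degree matrix
L = Deg - W                         # weighted Laplacian

eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals[0])                   # ~0 (smallest eigenvalue of a Laplacian)
s2 = eigvals[1]                     # algebraic connectivity s_2(L)
print(s2 > 0)                       # True iff the graph is connected
```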

3. Game Formulation

In this section, we set up the mathematical formulation of the problem under consideration.
Consider a set of players $\mathcal{N} = \{1,\dots,N\}$, where every player $i\in\mathcal{N}$ controls its local decision variable $x_i\in\Omega_i\subseteq\mathbb{R}^{n_i}$ and $\Omega_i$ is the private decision set of player $i$. Denote $n := \sum_{i=1}^N n_i$ and $\Omega := \Omega_1\times\cdots\times\Omega_N\subseteq\mathbb{R}^n$; then the stacked vector of all the players' decisions $x := \mathrm{col}(x_i)_{i\in\mathcal{N}}\in\mathbb{R}^n$ is called the decision profile. We also write $x = (x_i, x_{-i})$, where $x_{-i} := \mathrm{col}(x_j)_{j\in\mathcal{N}\setminus\{i\}} = \mathrm{col}(x_1,\dots,x_{i-1},x_{i+1},\dots,x_N)$ denotes all the decisions except player $i$'s.
The local objective function of each player $i\in\mathcal{N}$ is denoted by $J_i(x_i, x_{-i})$, and the affine coupling constraint set is defined as
$$K := \prod_{i=1}^N \Omega_i \cap \{x\in\mathbb{R}^n \mid Ax \le b\},$$
where $A := [A_1,\dots,A_N]\in\mathbb{R}^{m\times n}$, $A_i\in\mathbb{R}^{m\times n_i}$, and $b := \sum_{i=1}^N b_i\in\mathbb{R}^m$. Here, $A_i$ and $b_i$ are local data accessible only to player $i$. Define the feasible set of player $i$ as $K_i(x_{-i}) := \{x_i\in\mathbb{R}^{n_i} \mid (x_i, x_{-i})\in K\}$, which implies that the feasible set of each player depends on the actions of the other players. Every player aims to optimize its objective function, and the game can be represented by the interdependent optimization problems
$$\forall i\in\mathcal{N}:\quad \min_{x_i\in\mathbb{R}^{n_i}} J_i(x_i, x_{-i}) \quad \text{s.t.} \quad x_i\in K_i(x_{-i}).$$
Definition 1.
A GNE of game (3) is a collective strategy $x^* = \mathrm{col}(x_i^*)_{i\in\mathcal{N}}$ such that, for all $i\in\mathcal{N}$,
$$x_i^* \in \arg\min_{x_i} J_i(x_i, x_{-i}^*) \quad \text{s.t.} \quad x_i\in K_i(x_{-i}^*).$$
In order to deal with the coupling constraints and solve the problems, we define the Lagrangian function of each player $i\in\mathcal{N}$:
$$L_i(x_i, \lambda_i; x_{-i}) = J_i(x_i, x_{-i}) + \lambda_i^\top(Ax - b),$$
where $\lambda_i\in\mathbb{R}^m_+$ is a dual variable. According to optimization theory, if $x_i^*$ is an optimal solution of (3), then there exists $\lambda_i^*\in\mathbb{R}^m$ such that the following KKT conditions are satisfied:
$$\begin{cases} \nabla_{x_i} L_i(x_i^*, \lambda_i^*; x_{-i}^*) = \mathbf{0}_{n_i}, \\ \langle\lambda_i^*, Ax^* - b\rangle = 0, \\ Ax^* - b \le \mathbf{0}_m, \quad \lambda_i^* \ge \mathbf{0}_m. \end{cases}$$
By using the normal cone operator, the KKT conditions (6) are equivalent to
$$\begin{cases} \mathbf{0}_{n_i} \in \nabla_{x_i} J_i(x_i^*, x_{-i}^*) + A_i^\top\lambda_i^* + N_{\Omega_i}(x_i^*), \\ \mathbf{0}_m \in -(Ax^* - b) + N_{\mathbb{R}^m_+}(\lambda_i^*). \end{cases}$$
Note that, by the definition of the normal cone, $N_{\mathbb{R}^m_+}(\lambda_i^*) = \varnothing$ when $\lambda_i^* \notin \mathbb{R}^m_+$, which implies $\lambda_i^*\in\mathbb{R}^m_+$ (equivalently, $[\lambda_i^*]_k \ge 0$) whenever (7) holds. Furthermore, $N_{\mathbb{R}^m_+} = \prod_{k=1}^m N_{\mathbb{R}_+}$; that is, if $[\lambda_i^*]_k = 0$, then $N_{\mathbb{R}_+}([\lambda_i^*]_k) = \mathbb{R}_-$ and thus $[Ax^* - b]_k \le 0$; if $[\lambda_i^*]_k > 0$, then $N_{\mathbb{R}_+}([\lambda_i^*]_k) = \{0\}$ and hence $[Ax^* - b]_k = 0$. This result implies that $Ax^* - b \le 0$ and $\langle\lambda_i^*, Ax^* - b\rangle = 0$.
We consider the GNE with the same Lagrangian multipliers for every player, i.e., $\lambda_1^* = \lambda_2^* = \dots = \lambda_N^* = \lambda^*$, which is called the variational GNE (v-GNE). The v-GNE $x^*$ is a solution of the following variational inequality $VI(F, K)$:
$$\langle F(x^*), x - x^*\rangle \ge 0, \quad \forall x\in K,$$
where $F$ is the pseudo-gradient mapping of the game, with the following form:
$$F(x) := \mathrm{col}\big(\nabla_{x_i} J_i(x_i, x_{-i})\big)_{i\in\mathcal{N}}.$$
Assumption 1.
Given $x_{-i}$, $J_i(x_i, x_{-i})$ is continuously differentiable and convex in $x_i$, and $\Omega_i$ is nonempty, compact and convex for each player $i$. Moreover, $K$ is nonempty and satisfies Slater's constraint qualification.
Assumption 2.
$F$ is $\mu$-strongly monotone and $\theta_0$-Lipschitz continuous, i.e., for any points $x$ and $x'$, $\langle x - x', F(x) - F(x')\rangle \ge \mu\|x - x'\|^2$ and $\|F(x) - F(x')\| \le \theta_0\|x - x'\|$.
It follows from ([15], [Theorem 4.8]) that $x^*$ solves $VI(F, K)$ (8) if and only if there exists $\lambda^*\in\mathbb{R}^m$ such that the following KKT conditions are satisfied:
$$\begin{cases} \mathbf{0}_n \in F(x^*) + A^\top\lambda^* + N_\Omega(x^*), \\ \mathbf{0}_m \in -(Ax^* - b) + N_{\mathbb{R}^m_+}(\lambda^*), \end{cases}$$
where $N_\Omega(x^*) = \prod_{i=1}^N N_{\Omega_i}(x_i^*)$.
Assumption 1 guarantees the existence of the v-GNE for game (3) by ([31], [Corollary 2.2.5]). The goal of this paper is to design distributed algorithms for seeking the v-GNE under a partial-decision information setting, where both the computational cost and convergence rate are taken into consideration.
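A candidate pair $(x^*, \lambda^*)$ can be checked against the KKT system (10) numerically. The sketch below (Python/NumPy; the function names, the unit step length inside the projected-gradient residual, and the way the data are passed in are all illustrative choices, not part of the paper) evaluates stationarity, primal/dual feasibility and complementarity residuals.

```python
import numpy as np

def kkt_residuals(x, lam, grad_F, A, b, project_Omega):
    """Residuals of the KKT conditions (10) for VI(F, K).

    grad_F(x)     : pseudo-gradient F(x) as a vector
    project_Omega : Euclidean projection onto Omega
    A, b          : coupling constraint data, A x <= b
    """
    # Stationarity: x* is a fixed point of the projected (unit-step) update.
    stat = np.linalg.norm(x - project_Omega(x - (grad_F(x) + A.T @ lam)))
    primal = np.maximum(A @ x - b, 0.0).max()      # violation of A x <= b
    dual = np.maximum(-lam, 0.0).max()             # violation of lam >= 0
    compl = abs(lam @ (A @ x - b))                 # complementarity <lam, Ax - b>
    return stat, primal, dual, compl
```

All four residuals being (numerically) zero certifies that the candidate satisfies (10) and hence is a v-GNE under Assumptions 1–2.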

4. Alternating Distributed v-GNE Algorithms

In this section, we propose two kinds of distributed algorithms for seeking the v-GNE of game (3) with partial-decision information, where each player controls its own actions and exchanges information with its neighbors via the communication graph.
Remark 1.
Some GNE seeking algorithms with inertia and overrelaxation have been proposed [28,29]. Although the fast convergence of these algorithms has been validated numerically, more computation resources are inevitably required at each iteration, and computation resources can be limited and expensive in many situations. Inspired by the above discussion, in this section we design distributed GNE seeking algorithms with alternating inertia and alternating overrelaxation, where both a fast convergence rate and a low computation consumption are taken into consideration.
Suppose that player $i\in\mathcal{N}$ controls its local decision $x_i\in\mathbb{R}^{n_i}$ and $\lambda_i\in\mathbb{R}^m_+$ (i.e., its estimate of $\lambda^*$ in (10)). In order to make up for the lack of non-neighbors' information, we introduce for each player $i$ an auxiliary variable $\boldsymbol{x}^i$ that provides an estimate of the other players' decisions. To be specific, $\boldsymbol{x}^i = \mathrm{col}(x^i_j)_{j\in\mathcal{N}}$, where $x^i_j$ denotes player $i$'s estimate of player $j$'s decision and $x^i_i = x_i$. We can also write $\boldsymbol{x}^i = (x_i, \boldsymbol{x}^i_{-i})$, where $\boldsymbol{x}^i_{-i}$ collects player $i$'s estimates of all decisions except its own. In addition, an auxiliary variable $z_i\in\mathbb{R}^m$ is introduced for each player $i\in\mathcal{N}$. We assume that each player exchanges its local variables $\{\boldsymbol{x}^i, \lambda_i, z_i\}$ with its neighbors through the communication graph $\mathcal{G}$.
Assumption 3.
The communication graph G is undirected and connected.

4.1. Alternating Inertial Distributed v-GNE Seeking Algorithm

In this subsection, we propose an alternating inertial distributed algorithm for seeking the v-GNE, where the inertia is adopted intermittently (see Algorithm 1). Here, $x_{i,k}$, $\boldsymbol{x}^i_{-i,k}$, $z_{i,k}$, $\lambda_{i,k}$ denote $x_i$, $\boldsymbol{x}^i_{-i}$, $z_i$, $\lambda_i$ at iteration $k$, respectively, and $\tilde{x}_{i,k}$, $\tilde{\boldsymbol{x}}^i_{-i,k}$, $\tilde{z}_{i,k}$, $\tilde{\lambda}_{i,k}$ denote $\tilde{x}_i$, $\tilde{\boldsymbol{x}}^i_{-i}$, $\tilde{z}_i$, $\tilde{\lambda}_i$ at iteration $k$, respectively. $\rho$ is the inertial parameter, $c$ is the coupling parameter, and $\tau_i$, $\nu_i$, $\sigma_i$ are the fixed step-sizes of player $i$ in the update step. $P_{\Omega_i}$ is the projection operator onto the set $\Omega_i$.
Let $\boldsymbol{x} := \mathrm{col}(\boldsymbol{x}^i)_{i\in\mathcal{N}}$, $\boldsymbol{z} := \mathrm{col}(z_i)_{i\in\mathcal{N}}$ and $\boldsymbol{\lambda} := \mathrm{col}(\lambda_i)_{i\in\mathcal{N}}$. Let $\tilde{\boldsymbol{x}} := \mathrm{col}(\tilde{\boldsymbol{x}}^i)_{i\in\mathcal{N}}$ with $\tilde{\boldsymbol{x}}^i := (\tilde{x}_i, \tilde{\boldsymbol{x}}^i_{-i})$, $\tilde{\boldsymbol{z}} := \mathrm{col}(\tilde{z}_i)_{i\in\mathcal{N}}$ and $\tilde{\boldsymbol{\lambda}} := \mathrm{col}(\tilde{\lambda}_i)_{i\in\mathcal{N}}$. In addition, $\boldsymbol{A} := \mathrm{diag}((A_i)_{i\in\mathcal{N}})$, $\mathbf{L}_\lambda := L\otimes I_m$, $\mathbf{L}_x := L\otimes I_n$, $\boldsymbol{b} := \mathrm{col}(b_i)_{i\in\mathcal{N}}$, $\bar{\tau}^{-1} := \mathrm{diag}((\tau_i^{-1} I_n)_{i\in\mathcal{N}})$, $\bar{\nu}^{-1} := \mathrm{diag}((\nu_i^{-1} I_m)_{i\in\mathcal{N}})$ and $\bar{\sigma}^{-1} := \mathrm{diag}((\sigma_i^{-1} I_m)_{i\in\mathcal{N}})$.
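These stacked matrices are straightforward to assemble from the graph Laplacian $L$ and the local data via Kronecker products and block diagonals; a minimal sketch (Python/NumPy with SciPy, function name and argument layout chosen only for illustration) is as follows.

```python
import numpy as np
from scipy.linalg import block_diag

def stacked_matrices(L, n, m, A_blocks, tau):
    """Assemble L_x = L (x) I_n, L_lambda = L (x) I_m, bold A = diag(A_i)
    and tau^{-1} = diag(tau_i^{-1} I_n) from the Laplacian and local data."""
    L_x = np.kron(L, np.eye(n))
    L_lam = np.kron(L, np.eye(m))
    A_bold = block_diag(*A_blocks)                        # diag((A_i)_{i in N})
    tau_inv = block_diag(*[np.eye(n) / t for t in tau])   # tau_i^{-1} I_n blocks
    return L_x, L_lam, A_bold, tau_inv
```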
The extended pseudo-gradient mapping F is defined as
$$\boldsymbol{F}(\boldsymbol{x}) := \mathrm{col}\big(\nabla_{x_i} J_i(x_i, \boldsymbol{x}^i_{-i})\big)_{i\in\mathcal{N}}.$$
Let $\varpi := \mathrm{col}(\boldsymbol{x}, \boldsymbol{z}, \boldsymbol{\lambda})\in\boldsymbol{\Omega}$, where $\boldsymbol{\Omega} := \mathbb{R}^{Nn}\times\mathbb{R}^{Nm}\times\mathbb{R}^{Nm}_+$, and define the operators $\mathcal{A}$, $\mathcal{B}$ and the matrix $\Phi$ as follows:
$$\mathcal{A}: \varpi \mapsto \begin{bmatrix} \mathcal{R}^\top N_\Omega(\mathcal{R}\boldsymbol{x}) \\ \mathbf{0} \\ N_{\mathbb{R}^{Nm}_+}(\boldsymbol{\lambda}) \end{bmatrix} + \begin{bmatrix} \mathbf{0} & \mathbf{0} & \mathcal{R}^\top\boldsymbol{A}^\top \\ \mathbf{0} & \mathbf{0} & \mathbf{L}_\lambda \\ -\boldsymbol{A}\mathcal{R} & -\mathbf{L}_\lambda & \mathbf{0} \end{bmatrix}\varpi, \qquad \mathcal{B}: \varpi \mapsto \begin{bmatrix} \mathcal{R}^\top\boldsymbol{F}(\boldsymbol{x}) + c\,\mathbf{L}_x\boldsymbol{x} \\ \mathbf{0} \\ \mathbf{L}_\lambda\boldsymbol{\lambda} + \boldsymbol{b} \end{bmatrix},$$
$$\Phi := \begin{bmatrix} \bar{\tau}^{-1} & \mathbf{0} & -\mathcal{R}^\top\boldsymbol{A}^\top \\ \mathbf{0} & \bar{\nu}^{-1} & -\mathbf{L}_\lambda \\ -\boldsymbol{A}\mathcal{R} & -\mathbf{L}_\lambda & \bar{\sigma}^{-1} \end{bmatrix},$$
where $\mathcal{R} := \mathrm{diag}((\mathcal{R}_i)_{i\in\mathcal{N}})$ with
$$\mathcal{R}_i := \begin{bmatrix} \mathbf{0}_{n_i\times n_{<i}} & I_{n_i} & \mathbf{0}_{n_i\times n_{>i}} \end{bmatrix},$$
$n_{<i} := \sum_{j<i} n_j$ and $n_{>i} := \sum_{j>i} n_j$.
Algorithm 1 Distributed alternating inertial v-GNE seeking.
Initialization: $x_{i,0}\in\Omega_i$, $\boldsymbol{x}^i_{-i,0}\in\mathbb{R}^{n-n_i}$, $\lambda_{i,0}\in\mathbb{R}^m_+$, $z_{i,0}\in\mathbb{R}^m$.
Acceleration: set $\rho_k = 0$ if $k$ is even and $\rho_k = \rho$ if $k$ is odd;
$$\begin{aligned} \tilde{x}_{i,k} &= x_{i,k} + \rho_k(x_{i,k} - x_{i,k-1}), & \tilde{\boldsymbol{x}}^i_{-i,k} &= \boldsymbol{x}^i_{-i,k} + \rho_k(\boldsymbol{x}^i_{-i,k} - \boldsymbol{x}^i_{-i,k-1}), \\ \tilde{z}_{i,k} &= z_{i,k} + \rho_k(z_{i,k} - z_{i,k-1}), & \tilde{\lambda}_{i,k} &= \lambda_{i,k} + \rho_k(\lambda_{i,k} - \lambda_{i,k-1}). \end{aligned}$$
Update:
$$\begin{aligned} x_{i,k+1} &= P_{\Omega_i}\Big(\tilde{x}_{i,k} - \tau_i\big(\nabla_{x_i} J_i(\tilde{x}_{i,k}, \tilde{\boldsymbol{x}}^i_{-i,k}) + A_i^\top\tilde{\lambda}_{i,k} + c\textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(\tilde{x}_{i,k} - \tilde{x}^j_{i,k})\big)\Big), \\ \boldsymbol{x}^i_{-i,k+1} &= \tilde{\boldsymbol{x}}^i_{-i,k} - \tau_i c\textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(\tilde{\boldsymbol{x}}^i_{-i,k} - \tilde{\boldsymbol{x}}^j_{-i,k}), \\ z_{i,k+1} &= \tilde{z}_{i,k} + \nu_i\textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(\tilde{\lambda}_{i,k} - \tilde{\lambda}_{j,k}), \\ \lambda_{i,k+1} &= P_{\mathbb{R}^m_+}\Big(\tilde{\lambda}_{i,k} + \sigma_i\big(A_i(2x_{i,k+1} - \tilde{x}_{i,k}) - b_i - \textstyle\sum_{j\in\mathcal{N}_i} w_{ij}\big(2(z_{i,k+1} - z_{j,k+1}) - (\tilde{z}_{i,k} - \tilde{z}_{j,k})\big) - \textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(\tilde{\lambda}_{i,k} - \tilde{\lambda}_{j,k})\big)\Big). \end{aligned}$$
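To make the structure of the update concrete, the following sketch (Python/NumPy) shows the alternating inertia extrapolation and player $i$'s primal decision step from Algorithm 1. It is only an illustrative fragment: the gradient value, the neighbors' estimates, the projection and the weights are assumed to be supplied by the caller, and the remaining update lines are omitted.

```python
import numpy as np

def inertia_step(v_k, v_prev, k, rho):
    """Alternating inertia: extrapolate only on odd iterations (rho_k = rho),
    and keep the plain iterate on even iterations (rho_k = 0)."""
    rho_k = rho if k % 2 == 1 else 0.0
    return v_k + rho_k * (v_k - v_prev)

def primal_update(x_tilde_i, nbr_estimates_of_i, grad_Ji, A_i, lam_tilde_i,
                  w_i, c, tau_i, project_Omega_i):
    """Player i's decision update (first line of the Update step).

    nbr_estimates_of_i[j] : neighbor j's current (inertial) estimate of x_i
    grad_Ji               : gradient of J_i evaluated at (x_tilde_i, x_tilde^i_{-i})
    w_i[j]                : edge weight w_ij
    """
    consensus = sum(w_i[j] * (x_tilde_i - xj_i)
                    for j, xj_i in nbr_estimates_of_i.items())
    step = grad_Ji + A_i.T @ lam_tilde_i + c * consensus
    return project_Omega_i(x_tilde_i - tau_i * step)
```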
Let $\varpi_k := \mathrm{col}(\boldsymbol{x}_k, \boldsymbol{z}_k, \boldsymbol{\lambda}_k)$ and $\tilde{\varpi}_k := \mathrm{col}(\tilde{\boldsymbol{x}}_k, \tilde{\boldsymbol{z}}_k, \tilde{\boldsymbol{\lambda}}_k)$, where $\boldsymbol{x}_k$, $\boldsymbol{z}_k$, $\boldsymbol{\lambda}_k$, $\tilde{\boldsymbol{x}}_k$, $\tilde{\boldsymbol{z}}_k$, $\tilde{\boldsymbol{\lambda}}_k$ denote $\boldsymbol{x}$, $\boldsymbol{z}$, $\boldsymbol{\lambda}$, $\tilde{\boldsymbol{x}}$, $\tilde{\boldsymbol{z}}$, $\tilde{\boldsymbol{\lambda}}$ at iteration $k$, respectively. Suppose that $\Phi \succ 0$ and $\Phi^{-1}\mathcal{A}$ is maximally monotone; then Algorithm 1 is equivalent to
$$\varpi_{k+1} = \begin{cases} T(\varpi_k), & \text{if } k \text{ is even}, \\ T\big(\varpi_k + \rho(\varpi_k - \varpi_{k-1})\big), & \text{if } k \text{ is odd}, \end{cases}$$
where $\Phi$, $\mathcal{A}$, $\mathcal{B}$ are given in (12) and (13), $T := T_2\circ T_1$, $T_1 := \mathrm{Id} - \Phi^{-1}\mathcal{B}$, and $T_2 := (\mathrm{Id} + \Phi^{-1}\mathcal{A})^{-1}$.
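In stacked form, iteration (15) simply alternates a plain forward–backward step with an inertial one. A compact sketch (Python/NumPy; `T_map` stands in for the composed map $T = T_2\circ T_1$, which is assumed to be provided) is:

```python
import numpy as np

def alternating_inertia(T_map, w0, rho, num_iters):
    """Iterate (15): plain step on even k, inertial step on odd k."""
    w_prev = w0.copy()
    w = w0.copy()
    for k in range(num_iters):
        if k % 2 == 0:
            w_next = T_map(w)                          # plain forward-backward step
        else:
            w_next = T_map(w + rho * (w - w_prev))     # inertial (extrapolated) step
        w_prev, w = w, w_next
    return w
```

Compared with applying inertia at every iteration, the extrapolation (and the extra stored iterate it requires) is only used on odd steps, which is the source of the computational saving discussed above.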
Lemma 1.
Suppose $\Phi \succ 0$ and $\Phi^{-1}\mathcal{A}$ is maximally monotone. Then any limit point $\bar{\varpi} = \mathrm{col}(\bar{\boldsymbol{x}}, \bar{\boldsymbol{z}}, \bar{\boldsymbol{\lambda}})$ of Algorithm 1 is a zero of $\mathcal{A} + \mathcal{B}$ and a fixed point of $T_2\circ T_1$.
Proof. 
By the continuity of the right-hand side of (15), $\bar{\varpi} = T(\bar{\varpi})$. Since $\Phi$ is positive definite,
$$\begin{aligned} \bar{\varpi} = T_2 T_1(\bar{\varpi}) &:= (\mathrm{Id} + \Phi^{-1}\mathcal{A})^{-1}(\mathrm{Id} - \Phi^{-1}\mathcal{B})(\bar{\varpi}) \\ &\Leftrightarrow (\mathrm{Id} - \Phi^{-1}\mathcal{B})(\bar{\varpi}) \in (\mathrm{Id} + \Phi^{-1}\mathcal{A})(\bar{\varpi}) \\ &\Leftrightarrow \mathbf{0} \in \Phi^{-1}(\mathcal{A} + \mathcal{B})(\bar{\varpi}) \Leftrightarrow \mathbf{0} \in (\mathcal{A} + \mathcal{B})(\bar{\varpi}). \end{aligned}$$
   □
In order to show the convergence of the algorithm, the following assumption is introduced.
Assumption 4.
The extended pseudo-gradient mapping $\boldsymbol{F}$ in (11) is $\theta$-Lipschitz continuous, i.e., there exists $\theta > 0$ such that, for any $\boldsymbol{x}$ and $\boldsymbol{x}'$, $\|\boldsymbol{F}(\boldsymbol{x}) - \boldsymbol{F}(\boldsymbol{x}')\| \le \theta\|\boldsymbol{x} - \boldsymbol{x}'\|$.
Let $c_{\min} := \frac{1}{s_2(L)}\big(\frac{(\theta + \theta_0)^2}{4\mu} + \theta\big)$ with $\mu$, $\theta_0$ from Assumption 2 and $\theta$ from Assumption 4. Let $\mathbf{E}_x := \{\boldsymbol{x}\in\mathbb{R}^{Nn} \mid \boldsymbol{x} = \mathbf{1}_N\otimes x,\ x\in\mathbb{R}^n\}$. It follows from ([21], [Lemma 4]) that if $c$ is selected such that $c > c_{\min}$, then $\mathcal{A}$ is maximally monotone and $\mathcal{B}$ is $\beta$-restricted cocoercive, i.e., for any $\varpi$ and any $\varpi'\in\boldsymbol{\Omega}_E$, where $\boldsymbol{\Omega}_E := \mathbf{E}_x\times\mathbb{R}^{Nm}\times\mathbb{R}^{Nm}_+$,
$$\langle\varpi - \varpi', \mathcal{B}\varpi - \mathcal{B}\varpi'\rangle \ge \beta\|\mathcal{B}\varpi - \mathcal{B}\varpi'\|^2,$$
where $0 < \beta \le \min\{\frac{1}{2d^*}, \frac{\mu}{\theta^2}\}$ and $d^*$ is the maximal weighted degree of $\mathcal{G}$, i.e., $d^* = \max\{\sum_{j=1}^N w_{1j},\dots,\sum_{j=1}^N w_{Nj}\}$.
Similar to [21], only a mild assumption (Assumption 4) on the extended pseudo-gradient mapping $\boldsymbol{F}$ is required, while the requirement of strict/strong monotonicity of $\boldsymbol{F}$ is relaxed.
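Given numerical estimates of $\mu$, $\theta_0$, $\theta$, the connectivity $s_2(L)$ and the weighted degrees $d_i$, the constant $c_{\min}$ and the step-size bounds appearing in Theorem 1 below can be evaluated directly. A sketch of this bookkeeping (Python/NumPy; the safety `margin` and the way $\beta$ is set to its upper bound are illustrative choices) is given here.

```python
import numpy as np

def design_parameters(mu, theta0, theta, s2_L, d, A_blocks, margin=1e-3):
    """Evaluate c_min, an admissible (beta, delta) pair and the step-size
    bounds tau_i, nu_i, sigma_i used in Theorem 1 (illustrative sketch)."""
    c_min = ((theta + theta0) ** 2 / (4 * mu) + theta) / s2_L
    d_star = max(d)                                       # maximal weighted degree
    beta = min(1.0 / (2 * d_star), mu / theta ** 2)       # upper bound on beta
    delta = 1.0 / (2 * beta) + margin                     # delta > 1/(2 beta)
    normA = [np.abs(Ai).sum(axis=1).max() for Ai in A_blocks]  # max row-sum norm
    tau = [1.0 / (a + delta) for a in normA]
    nu = [1.0 / (2 * di + delta) for di in d]
    sigma = [1.0 / (a + 2 * di + delta) for a, di in zip(normA, d)]
    return c_min, delta, tau, nu, sigma
```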
Theorem 1.
Suppose Assumptions 1–4 hold. Choose $c > c_{\min}$, $\delta > \frac{1}{2\beta}$, and the step sizes $\tau_i \le \frac{1}{\|A_i\| + \delta}$, $\nu_i \le \frac{1}{2d_i + \delta}$, and $\sigma_i \le \frac{1}{\|A_i\| + 2d_i + \delta}$. Then, for any $\rho\in[0,\frac{1}{2})$, the sequence $\{\boldsymbol{x}_k, \boldsymbol{z}_k, \boldsymbol{\lambda}_k\}_{k\in\mathbb{N}}$ generated by Algorithm 1 converges to an equilibrium $(\boldsymbol{x}^*, \boldsymbol{z}^*, \boldsymbol{\lambda}^*)$, where $\boldsymbol{x}^* = \mathbf{1}_N\otimes x^*$ and $x^*$ is a v-GNE of game (3).
Proof. 
It follows from Gershgorin's circle theorem ([32], [§6.8, Theorem 1]) that, for any given $\delta > 0$, $\Phi$ is positive definite and $\Phi - \delta I_{N(n+2m)}$ is positive semi-definite if the step sizes satisfy $\tau_i \le \frac{1}{\|A_i\| + \delta}$, $\nu_i \le \frac{1}{2d_i + \delta}$ and $\sigma_i \le \frac{1}{\|A_i\| + 2d_i + \delta}$.
Next, we first show the convergence of $\{\varpi_{2k}\}$ and then the convergence of $\{\varpi_k\}$.
By ([21], [Lemma 6]), $T_2\in\mathcal{A}(\frac{1}{2})$ and $T_1$ is $\frac{1}{2\beta\delta}$-restricted averaged, i.e., for any $\varpi$ and any $\varpi'\in\boldsymbol{\Omega}_E$,
$$\|T_1\varpi - T_1\varpi'\|^2_\Phi \le \|\varpi - \varpi'\|^2_\Phi - (2\beta\delta - 1)\|\varpi - \varpi' - (T_1\varpi - T_1\varpi')\|^2_\Phi.$$
It follows from ([30], [Proposition 4.32]) that $T = T_2\circ T_1$ is $\alpha$-restricted averaged, with $\alpha = \frac{2}{3}$ when $\delta > \frac{1}{\beta}$. Let $\varpi^*$ be a fixed point of $T$; then $\varpi^*\in\boldsymbol{\Omega}_E$ according to ([21], [Theorem 1]).
(i) For the subsequence $\{\varpi_{2k}\}$, by (15) we have $\varpi_{2(k+1)} = T\big(T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k})\big)$. Then, since $T$ is $\alpha$-restricted averaged, we obtain
$$\begin{aligned} \|\varpi_{2k+2} - \varpi^*\|^2_\Phi &= \|T\big(T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k})\big) - \varpi^*\|^2_\Phi \\ &\le \|T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi^*\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi_{2k+2}\|^2_\Phi. \end{aligned}$$
By resorting to the identity $\|\alpha x + (1-\alpha)y\|^2 + \alpha(1-\alpha)\|x - y\|^2 = \alpha\|x\|^2 + (1-\alpha)\|y\|^2$,
$$\|T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi^*\|^2_\Phi = (1+\rho)\|T(\varpi_{2k}) - \varpi^*\|^2_\Phi - \rho\|\varpi_{2k} - \varpi^*\|^2_\Phi + (1+\rho)\rho\|T(\varpi_{2k}) - \varpi_{2k}\|^2_\Phi,$$
and by using (17) again, (18) can be rewritten as
$$\begin{aligned} \|\varpi_{2k+2} - \varpi^*\|^2_\Phi &\le (1+\rho)\|\varpi_{2k} - \varpi^*\|^2_\Phi - \rho\|\varpi_{2k} - \varpi^*\|^2_\Phi - (1+\rho)\tfrac{1-\alpha}{\alpha}\|T(\varpi_{2k}) - \varpi_{2k}\|^2_\Phi \\ &\quad + (1+\rho)\rho\|T(\varpi_{2k}) - \varpi_{2k}\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi_{2k+2}\|^2_\Phi \\ &= \|\varpi_{2k} - \varpi^*\|^2_\Phi - (1+\rho)\Big(\tfrac{1-\alpha}{\alpha} - \rho\Big)\|T(\varpi_{2k}) - \varpi_{2k}\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi_{2k+2}\|^2_\Phi. \end{aligned}$$
Choosing $\rho \le \frac{1-\alpha}{\alpha} = \frac{1}{2}$, we then have
$$\|\varpi_{2k+2} - \varpi^*\|^2_\Phi \le \|\varpi_{2k} - \varpi^*\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi_{2k+2}\|^2_\Phi.$$
This result implies that the sequence $\{\|\varpi_{2k} - \varpi^*\|^2_\Phi\}$ is non-increasing and non-negative, and thus converges. Moreover, we have
$$\sum_{k=0}^\infty \|T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi_{2(k+1)}\|^2_\Phi < \infty,$$
and hence $T(\varpi_{2k}) + \rho(T(\varpi_{2k}) - \varpi_{2k}) - \varpi_{2(k+1)} \to 0$. Since $\{\varpi_{2k}\}$ is bounded, there exists a convergent subsequence $\{\varpi_{2n_k}\}$ that converges to some $\tilde{\varpi}$. Obviously,
$$\varpi_{2(n_k+1)} = T\big(T(\varpi_{2n_k}) + \rho(T(\varpi_{2n_k}) - \varpi_{2n_k})\big).$$
Letting $k\to\infty$, we obtain $\tilde{\varpi} = T(\tilde{\varpi})$, which implies that $\tilde{\varpi}$ is a fixed point of $T$ and thus $\{\|\varpi_{2k} - \tilde{\varpi}\|^2_\Phi\}$ converges. Since $\{\|\varpi_{2n_k} - \tilde{\varpi}\|^2_\Phi\}$ converges to 0, $\{\varpi_{2k}\}$ converges to $\tilde{\varpi}$.
(ii) $T$ is restricted nonexpansive since it is $\frac{2}{3}$-restricted averaged, and then one obtains
$$\|\varpi_{2k+1} - \tilde{\varpi}\|_\Phi = \|T(\varpi_{2k}) - T(\tilde{\varpi})\|_\Phi \le \|\varpi_{2k} - \tilde{\varpi}\|_\Phi,$$
which implies that the odd subsequence $\{\varpi_{2k+1}\}$ also converges to $\tilde{\varpi}$, and thus $\{\varpi_k\}$ converges to $\tilde{\varpi}$. Note that $\Phi \succ 0$ and $\Phi^{-1}\mathcal{A}$ is maximally monotone; $\tilde{\varpi}$ is a fixed point of $T$ and hence a zero of $\mathcal{A} + \mathcal{B}$ by Lemma 1. It follows from ([21], [Theorem 1]) that, for any $\tilde{\varpi} := \mathrm{col}(\boldsymbol{x}^*, \boldsymbol{z}^*, \boldsymbol{\lambda}^*)\in\mathrm{zer}(\mathcal{A} + \mathcal{B})$, $\boldsymbol{x}^* = \mathbf{1}_N\otimes x^*$ and $x^*$ solves $VI(F, K)$ (8); that is, $x^*$ is a v-GNE of game (3).    □

4.2. Alternating Overrelaxed Distributed v-GNE Seeking Algorithm

In this subsection, an alternating overrelaxed distributed algorithm is constructed for seeking the v-GNE, presented in Algorithm 2, where $\eta$ is the overrelaxation parameter. The partial-decision information setting is again considered here.
Algorithm 2 Distributed alternating overrelaxed v-GNE seeking.
Initialization: $x_{i,0}\in\Omega_i$, $\boldsymbol{x}^i_{-i,0}\in\mathbb{R}^{n-n_i}$, $\lambda_{i,0}\in\mathbb{R}^m_+$, $z_{i,0}\in\mathbb{R}^m$.
Update:
$$\begin{aligned} \tilde{x}_{i,k} &= P_{\Omega_i}\Big(x_{i,k} - \tau_i\big(\nabla_{x_i} J_i(x_{i,k}, \boldsymbol{x}^i_{-i,k}) + A_i^\top\lambda_{i,k} + c\textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(x_{i,k} - x^j_{i,k})\big)\Big), \\ \tilde{\boldsymbol{x}}^i_{-i,k} &= \boldsymbol{x}^i_{-i,k} - \tau_i c\textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(\boldsymbol{x}^i_{-i,k} - \boldsymbol{x}^j_{-i,k}), \\ \tilde{z}_{i,k} &= z_{i,k} + \nu_i\textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(\lambda_{i,k} - \lambda_{j,k}), \\ \tilde{\lambda}_{i,k} &= P_{\mathbb{R}^m_+}\Big(\lambda_{i,k} + \sigma_i\big(A_i(2\tilde{x}_{i,k} - x_{i,k}) - b_i - \textstyle\sum_{j\in\mathcal{N}_i} w_{ij}\big(2(\tilde{z}_{i,k} - \tilde{z}_{j,k}) - (z_{i,k} - z_{j,k})\big) - \textstyle\sum_{j\in\mathcal{N}_i} w_{ij}(\lambda_{i,k} - \lambda_{j,k})\big)\Big). \end{aligned}$$
Acceleration: set $\eta_k = 1$ if $k$ is even and $\eta_k = \eta$ if $k$ is odd;
$$\begin{aligned} x_{i,k+1} &= \tilde{x}_{i,k} + (\eta_k - 1)(\tilde{x}_{i,k} - x_{i,k}), & \boldsymbol{x}^i_{-i,k+1} &= \tilde{\boldsymbol{x}}^i_{-i,k} + (\eta_k - 1)(\tilde{\boldsymbol{x}}^i_{-i,k} - \boldsymbol{x}^i_{-i,k}), \\ z_{i,k+1} &= \tilde{z}_{i,k} + (\eta_k - 1)(\tilde{z}_{i,k} - z_{i,k}), & \lambda_{i,k+1} &= \tilde{\lambda}_{i,k} + (\eta_k - 1)(\tilde{\lambda}_{i,k} - \lambda_{i,k}). \end{aligned}$$
Similar to (15), suppose that $\Phi \succ 0$ and $\Phi^{-1}\mathcal{A}$ is maximally monotone; then Algorithm 2 is equivalent to
$$\varpi_{k+1} = \begin{cases} T(\varpi_k), & \text{if } k \text{ is even}, \\ T(\varpi_k) + (\eta - 1)\big(T(\varpi_k) - \varpi_k\big), & \text{if } k \text{ is odd}, \end{cases}$$
where $\varpi_k = \mathrm{col}(\boldsymbol{x}_k, \boldsymbol{z}_k, \boldsymbol{\lambda}_k)$ and $T$ is given in (15).
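In this stacked form the acceleration step of Algorithm 2 is a relaxed fixed-point update applied only on odd iterations. A compact sketch (Python/NumPy; `T_map` again stands in for the forward–backward map $T$ assumed to be provided) is:

```python
import numpy as np

def alternating_overrelaxation(T_map, w0, eta, num_iters):
    """Iterate (23): plain step on even k, overrelaxed step on odd k."""
    w = w0.copy()
    for k in range(num_iters):
        Tw = T_map(w)
        eta_k = eta if k % 2 == 1 else 1.0
        w = Tw + (eta_k - 1.0) * (Tw - w)
    return w
```

For $\eta_k = 1$ this reduces to the plain iteration $w \leftarrow T(w)$, so the only extra cost of the overrelaxed steps is the additional vector combination on odd iterations.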
Next, we prove the convergence of Algorithm 2 to a v-GNE.
Theorem 2.
Suppose Assumptions 1–4 hold. Take any $c > c_{\min}$, $\delta > \frac{1}{2\beta}$, and the step sizes $\tau_i \le \frac{1}{\|A_i\| + \delta}$, $\nu_i \le \frac{1}{2d_i + \delta}$, and $\sigma_i \le \frac{1}{\|A_i\| + 2d_i + \delta}$. Then, for any $\eta\in[1,\frac{3}{2})$, the sequence $\{\boldsymbol{x}_k, \boldsymbol{z}_k, \boldsymbol{\lambda}_k\}_{k\in\mathbb{N}}$ generated by Algorithm 2 converges to an equilibrium $(\boldsymbol{x}^*, \boldsymbol{z}^*, \boldsymbol{\lambda}^*)$, where $\boldsymbol{x}^* = \mathbf{1}_N\otimes x^*$ and $x^*$ is a v-GNE of game (3).
Proof. 
Similar to Theorem 1, we first show the convergence of $\{\varpi_{2k}\}$ and then prove the convergence of $\{\varpi_k\}$. Note that $T = T_2\circ T_1$ is $\alpha$-restricted averaged with $\alpha = \frac{2}{3}$ when $\delta > \frac{1}{\beta}$. Let $\varpi^*$ be any fixed point of $T$.
(i) First, we consider the subsequence $\{\varpi_{2k}\}$. According to (23) and (17), for even $k$ one has
$$\begin{aligned} \|\varpi_{k+2} - \varpi^*\|^2_\Phi &\le \eta\Big[\|T(\varpi_k) - \varpi^*\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|T(\varpi_k) - T(T(\varpi_k))\|^2_\Phi\Big] + (1-\eta)\Big[\|\varpi_k - \varpi^*\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|\varpi_k - T(\varpi_k)\|^2_\Phi\Big] \\ &\quad - \eta(1-\eta)\|T(T(\varpi_k)) - T(\varpi_k)\|^2_\Phi \\ &= (1-\eta)\|\varpi_k - \varpi^*\|^2_\Phi + \eta\|T(\varpi_k) - \varpi^*\|^2_\Phi - \Big[\eta\Big(\tfrac{1}{\alpha} - \eta\Big)\Big]\|T(\varpi_k) - T(T(\varpi_k))\|^2_\Phi - (1-\eta)\tfrac{1-\alpha}{\alpha}\|T(\varpi_k) - \varpi_k\|^2_\Phi \\ &\le \|\varpi_k - \varpi^*\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|T(\varpi_k) - \varpi_k\|^2_\Phi - \Big[\eta\Big(\tfrac{1}{\alpha} - \eta\Big)\Big]\|T(\varpi_k) - T(T(\varpi_k))\|^2_\Phi, \end{aligned}$$
where the equality holds due to the identity $\|\alpha x + (1-\alpha)y\|^2 + \alpha(1-\alpha)\|x - y\|^2 = \alpha\|x\|^2 + (1-\alpha)\|y\|^2$. By choosing $\eta \le \frac{1}{\alpha}$, we have
$$\|\varpi_{k+2} - \varpi^*\|^2_\Phi \le \|\varpi_k - \varpi^*\|^2_\Phi - \tfrac{1-\alpha}{\alpha}\|T(\varpi_k) - \varpi_k\|^2_\Phi,$$
which implies that $\{\|\varpi_{2k} - \varpi^*\|^2_\Phi\}$ is monotonically non-increasing and bounded below, and is thus convergent. Furthermore,
$$\sum_{k=0}^\infty\big(\|\varpi_{2(k+1)} - \varpi^*\|^2_\Phi - \|\varpi_{2k} - \varpi^*\|^2_\Phi\big) \le -\tfrac{1-\alpha}{\alpha}\sum_{k=0}^\infty\|T(\varpi_{2k}) - \varpi_{2k}\|^2_\Phi,$$
that is, $\sum_{k=0}^\infty\|T(\varpi_{2k}) - \varpi_{2k}\|^2_\Phi \le \frac{\alpha}{1-\alpha}\|\varpi_0 - \varpi^*\|^2_\Phi < \infty$, and hence
$$T(\varpi_{2k}) - \varpi_{2k} \to 0.$$
Note that since $\{\varpi_{2k}\}$ is bounded, there exists a convergent subsequence $\{\varpi_{2n_k}\}$ that converges to some limit $\tilde{\varpi}$.
Letting $k\to\infty$ in (26), we have $T(\tilde{\varpi}) = \tilde{\varpi}$, which implies that $\tilde{\varpi}$ is a fixed point of $T$; thus $\{\|\varpi_{2k} - \tilde{\varpi}\|^2_\Phi\}$ converges. Since $\{\|\varpi_{2n_k} - \tilde{\varpi}\|^2_\Phi\}$ converges to 0, $\{\varpi_{2k}\}$ converges to $\tilde{\varpi}$.
(ii) Since $T$ is restricted nonexpansive (being $\frac{2}{3}$-restricted averaged), one obtains
$$\|\varpi_{2k+1} - \tilde{\varpi}\|_\Phi = \|T(\varpi_{2k}) - T(\tilde{\varpi})\|_\Phi \le \|\varpi_{2k} - \tilde{\varpi}\|_\Phi,$$
which implies that the sequence $\{\varpi_{2k+1}\}$ converges to the same limit as $\{\varpi_{2k}\}$, and thus $\{\varpi_k\}$ converges to $\tilde{\varpi}$. The conclusion that $\tilde{\varpi} = \mathrm{col}(\boldsymbol{x}^*, \boldsymbol{z}^*, \boldsymbol{\lambda}^*)$ with $\boldsymbol{x}^* = \mathbf{1}_N\otimes x^*$ and $x^*$ a v-GNE of game (3) then follows exactly as in the proof of Theorem 1. □

5. Numerical Simulation

In this section, we consider a classic Nash–Cournot game over a network as in [21], where there are $N$ firms and each firm $i\in\{1,\dots,N\}$ produces commodities to participate in the competition over $m$ markets (see Figure 1). Each market (denoted by $M_1,\dots,M_m$) has limited capacity. Here, the partial-decision information setting is considered, where each firm has access only to its neighboring firms' information over the communication graph in Figure 2.
We assume that firm $i$ participates in $n_i$ markets by producing $x_i\in\mathbb{R}^{n_i}$ amounts of commodities, and its production is limited to the set $\Omega_i\subseteq\mathbb{R}^{n_i}$. The local matrix $A_i\in\mathbb{R}^{m\times n_i}$ for firm $i$ encodes which markets it participates in. Specifically, the $k$-th element of the $j$-th column of $A_i$ is 1 if and only if firm $i$ delivers $[x_i]_j$ units of production to market $M_k$; all other elements are 0. Each market $M_k$ has a maximal capacity $r_k > 0$; that is, $Ax \le r$, where $A = [A_1,\dots,A_N]$, $x = \mathrm{col}(x_i)_{i\in\mathcal{N}}\in\mathbb{R}^n$, $n = \sum_{i=1}^N n_i$ and $r = \mathrm{col}(r_k)_{k=1,\dots,m}\in\mathbb{R}^m$. Suppose that each firm $i$ has a production cost $c_i(x_i): \Omega_i\to\mathbb{R}$, and that the price function $P: \mathbb{R}^n\to\mathbb{R}^m$ maps the total supply of each market to the market's price. The local objective function of firm $i$ is $J_i(x_i, x_{-i}) = c_i(x_i) - (P(Ax))^\top A_i x_i$.
Suppose $N = 20$ and $m = 7$. Let $\Omega_i = \{x_i\in\mathbb{R}^{n_i} \mid \mathbf{0} \le x_i \le \Theta_i\}$, where each component of $\Theta_i$ is randomly drawn from $(5, 10)$, and each $r_k$ is randomly drawn from $(1, 2)$. The local cost function of firm $i$ is $c_i(x_i) = x_i^\top H_i x_i + h_i^\top x_i$, where $H_i$ is a diagonal matrix with diagonal elements randomly drawn from $(1, 8)$ and each component of $h_i$ is randomly drawn from $(1, 2)$. The price function is taken to be the linear function $P = \bar{P} - DAx$ with $\bar{P} = \mathrm{col}(\bar{P}_k)_{k=1,\dots,m}\in\mathbb{R}^m$ and $D = \mathrm{diag}(d_k)_{k=1,\dots,m}\in\mathbb{R}^{m\times m}$, where $\bar{P}_k$ and $d_k$ are randomly drawn from $(10, 20)$ and $(1, 3)$, respectively. We set the coupling parameter $c = 100$ and the step-sizes $\tau_i = 0.003$, $\nu_i = 0.02$, and $\sigma_i = 0.003$.
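A sketch of how random data with these statistics can be generated, and how the partial gradient $\nabla_{x_i} J_i$ of the resulting quadratic cost can be evaluated, is given below (Python/NumPy). The market-participation pattern, the random seed and the helper names are hypothetical choices for illustration, since the exact pattern used in the experiments is not specified.

```python
import numpy as np

rng = np.random.default_rng(0)        # seed chosen arbitrarily for reproducibility
N, m = 20, 7

firms = []
for i in range(N):
    # Hypothetical participation pattern: each firm joins a random subset of markets.
    markets = rng.choice(m, size=rng.integers(1, m + 1), replace=False)
    n_i = len(markets)
    A_i = np.zeros((m, n_i))
    A_i[markets, np.arange(n_i)] = 1.0            # [A_i]_{kj} = 1 iff firm i serves market k
    firms.append({"A": A_i,
                  "Theta": rng.uniform(5, 10, n_i),    # upper bounds of Omega_i
                  "H": np.diag(rng.uniform(1, 8, n_i)),
                  "h": rng.uniform(1, 2, n_i)})

r = rng.uniform(1, 2, m)                          # market capacities
P_bar = rng.uniform(10, 20, m)                    # price intercepts
D = np.diag(rng.uniform(1, 3, m))                 # price slopes

def grad_Ji(i, x_i, Ax_est):
    """Partial gradient of J_i = x_i' H_i x_i + h_i' x_i - (P_bar - D Ax)' A_i x_i,
    evaluated at firm i's decision x_i and its estimate Ax_est of the aggregate A x."""
    f = firms[i]
    price = P_bar - D @ Ax_est
    return 2 * f["H"] @ x_i + f["h"] - f["A"].T @ price + f["A"].T @ D @ f["A"] @ x_i
```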
First, Figure 3 shows that convergence to the v-GNE is achieved by Algorithms 1 and 2, and the trajectories of the local decisions $x_{i,k}$ of firms 1, 6, 10 and 11 are displayed in Figure 4. It can be seen in Figure 5 that the estimates of firm 1's and firm 3's decisions asymptotically tend to their real actions under the proposed algorithms.
Then, it can be seen from Figure 6 that both of the proposed Algorithms 1 and 2 converge to the GNE $x^*$ faster than ([21], [Alg. 1]), with Algorithm 1 having the fastest convergence rate. From Figure 7 we can see that the proposed Algorithm 1 also has a faster convergence rate than ([25], [Alg. 3]). Here we set $\alpha = 4.3\times 10^{-3}$ in ([25], [Alg. 3]), and used the same step-sizes $\tau_i$, $\nu_i$, $\sigma_i$ and the same remaining parameters as in Algorithms 1 and 2 for both ([21], [Alg. 1]) and ([25], [Alg. 3]). On the other hand, compared with an algorithm that applies inertia at every iteration, Algorithm 1 with alternating inertia requires fewer computation resources. Thus, Algorithm 1 could be the best choice when both a fast convergence rate and a low computation cost are taken into consideration.
Remark 2.
It is worthwhile to note that the introduction of inertia and overrelaxation steps has the potential of accelerating the convergence rate. Accordingly, in this paper, inertial and overrelaxed distributed algorithms are developed based on the pseudo-gradient method for seeking the generalized Nash equilibrium in multi-player games. A similar inertia idea has been considered for the proximal-point algorithm (see ([25], [Alg. 3])). However, the proximal-point algorithm generally needs to solve an optimization subproblem at each step $k$, which may be time-consuming and may consume a considerable amount of computation resources in many situations. Therefore, pseudo-gradient algorithms with inertia and overrelaxation are constructed in this paper, which guarantee convergence to the v-GNE with a fast convergence rate. Moreover, since the inertia and overrelaxation steps increase the per-iteration computation burden, the two alternating inertial and overrelaxed algorithms, Algorithms 1 and 2, are established to balance the convergence rate and the computation burden. In order to better display the effectiveness of our algorithms, a comparison with ([25], [Alg. 3]) is included in the simulation section (see Figure 7), from which it can be seen that Algorithm 1 outperforms ([25], [Alg. 3]) in terms of the convergence rate.

6. Conclusions

This paper has studied the GNE computation problem in multi-player games with shared coupling constraints under the partial-decision information setting. Two distributed algorithms, with alternating inertia and alternating overrelaxation, respectively, have been developed with fixed step-sizes. Both algorithms guarantee convergence to the GNE under a mild assumption and have the potential of improving the convergence rate while saving computation cost. Finally, a simulation example has been provided to show the effectiveness of the proposed algorithms. Future research may focus on stochastic NE seeking problems subject to time-varying topologies, with and without event-triggered communication protocols.

Author Contributions

Z.F.: Conceptualization, writing, editing and methodology; W.X.: Conceptualization, writing, editing and methodology; J.C.: Review, editing and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of Jiangsu Province under Grant BK20180367, the National Natural Science Foundation of China under Grant 61803082 and the Fundamental Research Funds for the Central Universities, and in part by the Alexander von Humboldt Foundation of Germany. This work was also supported by ZhiShan Youth Scholar Program from Southeast University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Myerson, R.B. Game Theory; Harvard University Press: Cambridge, MA, USA, 2013.
  2. Arrow, K.J.; Debreu, G. Existence of an equilibrium for a competitive economy. Econometrica 1954, 22, 265–290.
  3. Pang, J.-S.; Scutari, G.; Facchinei, F.; Wang, C.X. Distributed power allocation with rate constraints in Gaussian frequency-selective interference channels. IEEE Trans. Inf. Theory 2007, 54, 3471–3489.
  4. Liu, Y.; Guo, L.; Lu, C.; Chai, Y.; Gao, S.; Xu, B. A fully distributed voltage optimization method for distribution networks considering integer constraints of step voltage regulators. IEEE Access 2019, 7, 60055–60066.
  5. Breton, M.; Zaccour, G.; Zahaf, M. A game-theoretic formulation of joint implementation of environmental projects. Eur. J. Oper. Res. 2006, 168, 221–239.
  6. Ardagna, D.; Panicucci, B.; Passacantando, M. A game theoretic formulation of the service provisioning problem in cloud systems. In Proceedings of the 20th International Conference on World Wide Web, Hyderabad, India, 28 March–1 April 2011; pp. 177–186.
  7. Pang, J.-S.; Scutari, G.; Palomar, D.P.; Facchinei, F. Design of cognitive radio systems under temperature-interference constraints: A variational inequality approach. IEEE Trans. Signal Process. 2010, 58, 3251–3271.
  8. Tang, Y.; Gao, H.; Zhang, W.; Kurths, J. Leader-following consensus of a class of stochastic delayed multi-agent systems with partial mixed impulses. Automatica 2015, 53, 346–354.
  9. Zhao, C.; Xiang, B.; Chen, B. Fusion estimation for discrete time-varying systems with bounded nonlinearities. IEEE Access 2019, 7, 27097–27105.
  10. Brückner, M.; Kanzow, C.; Scheffer, T. Static prediction games for adversarial learning problems. J. Mach. Learn. Res. 2012, 13, 2617–2654.
  11. Xu, M.; Zhang, Y.; Zhang, D.; Chen, B. Distributed robust dimensionality reduction fusion estimation under DoS attacks and uncertain covariances. IEEE Access 2021, 9, 10328–10337.
  12. Holt, C.A.; Roth, A.E. The Nash equilibrium: A perspective. Proc. Natl. Acad. Sci. USA 2004, 101, 3999–4002.
  13. Debreu, G. A social equilibrium existence theorem. Proc. Natl. Acad. Sci. USA 1952, 38, 886–893.
  14. Dutang, C. A Survey of GNE Computation Methods: Theory and Algorithms; HAL: Lyon, France, 2013.
  15. Facchinei, F.; Kanzow, C. Generalized Nash equilibrium problems. Ann. Oper. Res. 2010, 175, 177–211.
  16. Yi, P.; Pavel, L. A distributed primal-dual algorithm for computation of generalized Nash equilibria via operator splitting methods. In Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control, Melbourne, VIC, Australia, 12–15 December 2017; pp. 3841–3846.
  17. Carlson, F.; Krupar, K.R. Real and complex monotone communication games. Engl. J. 2014, 64, 65.
  18. Zhu, Y.; Yu, W.; Wen, G.; Chen, G.; Ren, W. Continuous-time distributed subgradient algorithm for convex optimization with general constraints. IEEE Trans. Autom. Control 2018, 64, 1694–1701.
  19. Frihauf, P.; Krstic, M.; Basar, T. Nash equilibrium seeking in noncooperative games. IEEE Trans. Autom. Control 2011, 57, 1192–1207.
  20. Bimpikis, K.; Ehsani, S.; Ilkılıç, R. Cournot competition in networked markets. Manag. Sci. 2019, 65, 2467–2481.
  21. Pavel, L. Distributed GNE seeking under partial-decision information over networks via a doubly-augmented operator splitting approach. IEEE Trans. Autom. Control 2020, 65, 1584–1597.
  22. Swenson, B.; Kar, S.; Xavier, J. Empirical centroid fictitious play: An approach for distributed learning in multi-agent games. IEEE Trans. Signal Process. 2015, 63, 3888–3901.
  23. Koshal, J.; Nedić, A.; Shanbhag, U.V. Distributed algorithms for aggregative games on graphs. Oper. Res. 2016, 64, 680–704.
  24. Zhu, M.; Frazzoli, E. Distributed robust adaptive equilibrium computation for generalized convex games. Automatica 2016, 63, 82–91.
  25. Bianchi, M.; Belgioioso, G.; Grammatico, S. Fast generalized Nash equilibrium seeking under partial-decision information. arXiv 2020, arXiv:2003.09335.
  26. Pavel, L. A doubly-augmented operator splitting approach for distributed GNE seeking over networks. In Proceedings of the 2018 IEEE Conference on Decision and Control, Miami, FL, USA, 17–19 December 2018; pp. 3529–3534.
  27. Ye, M.; Hu, G. Distributed Nash equilibrium seeking by a consensus based approach. IEEE Trans. Autom. Control 2017, 62, 4811–4818.
  28. Zhu, Y.; Yu, W.; Wen, G.; Chen, G. Distributed Nash equilibrium seeking in an aggregative game on a directed graph. IEEE Trans. Autom. Control 2020, 66, 2746–2753.
  29. Yi, P.; Pavel, L. An operator splitting approach for distributed generalized Nash equilibria computation. Automatica 2019, 102, 111–121.
  30. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
  31. Facchinei, F.; Pang, J.-S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
  32. Franklin, J.N. Matrix Theory; Courier Corporation: North Chelmsford, MA, USA, 2012.
Figure 1. Network of the Nash–Cournot game; an arrow from $i$ to $M_k$ means firm $i$ participates in market $M_k$'s competition.
Figure 2. Communication graph among all firms; an edge between $i$ and $j$ means firms $i$ and $j$ can exchange information through the graph.
Figure 3. The trajectories of $\|x_{k+1} - x_k\|$ generated by Algorithms 1 and 2.
Figure 4. The trajectories of the local decisions $x_{i,k}$ of firms 1, 6, 10 and 11 generated by Algorithms 1 and 2, respectively.
Figure 5. The trajectories of the estimates $x^j_1$ of firm 1's decision held by firms 1–6, generated by Algorithm 1 (left); and the trajectories of the estimates $x^j_3$ of firm 3's decision held by firms 1–6, generated by Algorithm 2 (right).
Figure 6. Relative error $\|x_k - x^*\|_2/\|x^*\|_2$ generated by ([21], [Algorithm 1]) and Algorithms 1 and 2.
Figure 7. Relative error $\|x_k - x^*\|_2/\|x^*\|_2$ generated by Algorithm 1 and ([25], [Alg. 3]).