Article

Computing Nash Equilibria for Multiplayer Symmetric Games Based on Tensor Form

School of Mathematical Sciences, Guizhou Normal University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(10), 2268; https://doi.org/10.3390/math11102268
Submission received: 23 April 2023 / Revised: 7 May 2023 / Accepted: 11 May 2023 / Published: 12 May 2023

Abstract

In an m-person symmetric game, all players are identical and indistinguishable. In this paper, we find that the payoff tensor of player k in an m-person symmetric game is k-mode symmetric, and that the payoff tensors of two different players are transposes of each other. Furthermore, we reformulate the m-person symmetric game as a tensor complementarity problem and demonstrate that locating a symmetric Nash equilibrium is equivalent to finding a solution of the resulting tensor complementarity problem. Finally, we use the hyperplane projection algorithm to solve the resulting tensor complementarity problem, and we present some numerical results for computing symmetric Nash equilibria.

1. Introduction

A game is symmetric if all players have the same strategy set and the payoff of a given strategy is determined only by the strategy itself, not by who plays it. Since the advent of game theory, symmetric games have played an important role in the life sciences, economics, and statistical physics. As early as 1951, Nash [1,2] proposed a very important equilibrium concept, the Nash equilibrium, which is a strategy profile in which each player's strategy is an optimal response to the strategies of the other players; in particular, when all players adopt such a profile, no player can improve his or her payoff by unilaterally changing his or her strategy. This concept has the nice property that every finite game admits at least one Nash equilibrium, and every finite symmetric game admits a symmetric Nash equilibrium (an equilibrium in which all players use the same strategy) [1,2,3,4]. Many researchers have studied symmetric games; for further results, one can refer to [5,6,7,8,9,10,11,12,13,14,15]. In [16], Passacantando and Raciti studied a generalized Nash equilibrium problem in which each player is modeled as a node of a network and a player's utility depends on the player's own actions as well as on the actions of his or her network neighbors.
Symmetric games are particularly important in the study of economic and biological models [17,18,19,20]. Many interactions have been modeled as one-shot games, and several of them are symmetric two-player cooperation dilemmas, including the Prisoner's Dilemma [21,22], the Hawk–Dove Game [22], the Snowdrift Game [23], and the Stag-Hunt Game [24]. If each player has n actions to choose from in a two-player symmetric game, the game can be represented by two n-by-n matrices [25,26]. It is known that the two-player symmetric game can be formulated as a linear complementarity problem, and the well-known Lemke–Howson algorithm [25] was designed to solve this linear complementarity problem.
However, in many real-life situations, decisions are made by groups of people that include more than two individuals [20]. This type of collective-action problem is better described as an m-person symmetric game [27,28,29]. We prefer to find symmetric equilibria for an m-person symmetric game because asymmetric behavior appears relatively unintuitive [6] and difficult to explain in a one-shot interaction [30]. Nash proved that there is a symmetric equilibrium in every m-person symmetric game [2], but the Nash-equilibrium equations for m-person games are non-linear, which are difficult to solve analytically in general.
To better address the m-person game, Huang and Qi [31] extended the classic form of a game to a tensor form, using a tensor to represent the utility function of each player. They reformulated the m-person game as a tensor complementarity problem and demonstrated that finding a Nash equilibrium of the m-person game is equivalent to finding a solution of the resulting tensor complementarity problem. However, the resulting tensor has a large scale; its order is equal to the product of the number of players in the game and the total number of pure strategies related to the game. Abdou et al. [32] presented an efficient method for computing pure Nash equilibria using tensor operations. However, these works did not take the Nash equilibria of multiplayer symmetric games into consideration.
In this paper, we consider tensor-based m-person symmetric games. We prove that the payoff tensors of two different players in an m-person symmetric game are transposes of each other, so the payoff tensors of all players are uniquely determined by the payoff tensor of the first player. We also reformulate the m-person symmetric game as a tensor complementarity problem and demonstrate that finding a symmetric Nash equilibrium of an m-person symmetric game is equivalent to solving an m-th order tensor complementarity problem. It is worth noting that, because the symmetry of the game is exploited, both the order and the size of the tensor involved are smaller than those in [31]. Finally, we solve the tensor complementarity problem by the hyperplane projection algorithm, which we apply to the m-person Volunteer's Dilemma, the m-person Snowdrift Game, and some randomly constructed multiplayer symmetric games.
The rest of this paper is organized as follows. In Section 2, we present some definitions and notations that will be used frequently in the sequel. In Section 3, we provide a brief description of the m-person symmetric game and its reformulation as a tensor complementarity problem. Numerical results illustrating the reliability of the proposed approach are reported in Section 4. Finally, we give a brief summary in Section 5.

2. Preliminaries

In this section, we introduce some definitions and notations that will be used in the sequel. Throughout this paper, we use lowercase letters (e.g., $a$), bold lowercase letters (e.g., $\mathbf{a}$), capital letters (e.g., $A$), and calligraphic letters (e.g., $\mathcal{A}$) to denote scalars, vectors, matrices, and tensors, respectively. For a positive integer $n$, let $[n] = \{1, 2, \ldots, n\}$.
A real $m$-th order $n_1 \times n_2 \times \cdots \times n_m$-dimensional tensor is a multidimensional array, whose elementwise form is
$$\mathcal{A} = (a_{i_1 i_2 \cdots i_m}), \quad i_j \in [n_j], \ j = 1, 2, \ldots, m.$$
When $m = 2$, $\mathcal{A}$ is an $n_1$-by-$n_2$ matrix. If $n_1 = n_2 = \cdots = n_m = n$, $\mathcal{A}$ is called a real $m$-th order $n$-dimensional tensor. We denote the set of all real $m$-th order $n_1 \times n_2 \times \cdots \times n_m$-dimensional tensors by $\mathbb{R}^{n_1 \times n_2 \times \cdots \times n_m}$ and the set of all real $m$-th order $n$-dimensional tensors by $\mathbb{R}^{[m,n]}$. When $m = 1$, $\mathbb{R}^{[1,n]}$ is simplified to $\mathbb{R}^n$, the set of all $n$-dimensional real vectors. We denote the set of all non-negative $n$-dimensional real vectors by $\mathbb{R}^n_+$. Let $\mathbf{1}_n$, $\mathbf{e}_i$, and $\mathbf{0}$ denote the $n$-dimensional vector of all ones, the $i$-th column of the $n$-dimensional identity matrix, and the zero vector, respectively. The notation $\mathbf{x} \ge \mathbf{0}$ means that each component of $\mathbf{x}$ is non-negative.
The definition of the k-mode product of a tensor with a vector is recalled as follows.
Definition 1 
([33]). The $k$-mode (vector) product of a tensor $\mathcal{A} = (a_{i_1 \cdots i_k \cdots i_m}) \in \mathbb{R}^{n_1 \times \cdots \times n_k \times \cdots \times n_m}$ with a vector $\mathbf{v} = (v_{i_k}) \in \mathbb{R}^{n_k}$ is denoted by $\mathcal{A} \bar{\times}_k \mathbf{v}$; it is a real $(m-1)$-th order $n_1 \times \cdots \times n_{k-1} \times n_{k+1} \times \cdots \times n_m$-dimensional tensor with entries
$$(\mathcal{A} \bar{\times}_k \mathbf{v})_{i_1 \cdots i_{k-1} i_{k+1} \cdots i_m} = \sum_{i_k = 1}^{n_k} a_{i_1 \cdots i_k \cdots i_m} v_{i_k},$$
for any $i_j \in [n_j]$ with $j \in [m] \setminus \{k\}$.
Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_m}$ and $\mathbf{u}^{(k)} = (u^{(k)}_{i_k}) \in \mathbb{R}^{n_k}$ with $k \in [m]$. According to Definition 1, it follows that
$$\mathcal{A} \bar{\times}_m \mathbf{u}^{(m)} \bar{\times}_{m-1} \mathbf{u}^{(m-1)} \cdots \bar{\times}_2 \mathbf{u}^{(2)} \bar{\times}_1 \mathbf{u}^{(1)} = \sum_{i_1 = 1}^{n_1} \sum_{i_2 = 1}^{n_2} \cdots \sum_{i_m = 1}^{n_m} a_{i_1 i_2 \cdots i_m} u^{(m)}_{i_m} \cdots u^{(2)}_{i_2} u^{(1)}_{i_1},$$
where $\mathcal{A} \bar{\times}_m \mathbf{u}^{(m)} \bar{\times}_{m-1} \mathbf{u}^{(m-1)} \cdots \bar{\times}_2 \mathbf{u}^{(2)}$ is an $n_1$-dimensional vector with entries
$$(\mathcal{A} \bar{\times}_m \mathbf{u}^{(m)} \bar{\times}_{m-1} \mathbf{u}^{(m-1)} \cdots \bar{\times}_2 \mathbf{u}^{(2)})_i = \sum_{i_2 = 1}^{n_2} \cdots \sum_{i_m = 1}^{n_m} a_{i i_2 \cdots i_m} u^{(m)}_{i_m} \cdots u^{(2)}_{i_2}.$$
For simplicity of notation, we use $\mathcal{A} \mathbf{u}^{(m)} \cdots \mathbf{u}^{(2)} \mathbf{u}^{(1)}$ to denote the scalar
$$\mathcal{A} \bar{\times}_m \mathbf{u}^{(m)} \bar{\times}_{m-1} \mathbf{u}^{(m-1)} \cdots \bar{\times}_2 \mathbf{u}^{(2)} \bar{\times}_1 \mathbf{u}^{(1)},$$
and use $\mathcal{A} \mathbf{u}^{(m)} \cdots \mathbf{u}^{(2)}$ to denote the real $n_1$-dimensional vector
$$\mathcal{A} \bar{\times}_m \mathbf{u}^{(m)} \bar{\times}_{m-1} \mathbf{u}^{(m-1)} \cdots \bar{\times}_2 \mathbf{u}^{(2)}.$$
In particular, when $n_1 = n_2 = \cdots = n_m = n$ and $\mathbf{u}, \mathbf{u}^* \in \mathbb{R}^n$, we use the symbols $\mathcal{A}\mathbf{u}^m$, $\mathcal{A}(\mathbf{u}^*)^{m-1}\mathbf{u}$, and $\mathcal{A}(\mathbf{u}^*)^{m-k}\mathbf{u}(\mathbf{u}^*)^{k-1}$ to denote the scalars
$$\mathcal{A} \bar{\times}_m \mathbf{u} \bar{\times}_{m-1} \mathbf{u} \cdots \bar{\times}_2 \mathbf{u} \bar{\times}_1 \mathbf{u},$$
$$\mathcal{A} \bar{\times}_m \mathbf{u}^* \bar{\times}_{m-1} \mathbf{u}^* \cdots \bar{\times}_2 \mathbf{u}^* \bar{\times}_1 \mathbf{u},$$
and
$$\mathcal{A} \bar{\times}_m \mathbf{u}^* \cdots \bar{\times}_{k+1} \mathbf{u}^* \bar{\times}_k \mathbf{u} \bar{\times}_{k-1} \mathbf{u}^* \cdots \bar{\times}_2 \mathbf{u}^* \bar{\times}_1 \mathbf{u}^*,$$
respectively. The symbol $\mathcal{A}\mathbf{u}^{m-1}$ is short for the $n$-dimensional vector $\mathcal{A} \bar{\times}_m \mathbf{u} \bar{\times}_{m-1} \mathbf{u} \cdots \bar{\times}_2 \mathbf{u}$.
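To make this notation concrete, the following short Python/NumPy sketch implements the k-mode vector product and the shorthands $\mathcal{A}\mathbf{u}^{m-1}$ and $\mathcal{A}\mathbf{u}^{m}$ for cubical tensors. It is an illustration only (the experiments in this paper were carried out in MATLAB); the helper names mode_k_vecprod, poly_vec, and poly_val are ours, not the paper's.

```python
import numpy as np

def mode_k_vecprod(A, v, k):
    """k-mode (vector) product: contract mode k (1-indexed) of A with v,
    which drops that mode and returns an order-(m-1) tensor."""
    return np.tensordot(A, v, axes=([k - 1], [0]))

def poly_vec(A, u):
    """A u^{m-1}: contract modes m, m-1, ..., 2 of a cubical tensor A with u,
    leaving an n-dimensional vector."""
    T = A
    while T.ndim > 1:
        T = np.tensordot(T, u, axes=([T.ndim - 1], [0]))
    return T

def poly_val(A, u):
    """The scalar A u^m."""
    return float(poly_vec(A, u) @ u)

# small sanity check on a random 3rd-order, 2-dimensional tensor
rng = np.random.default_rng(0)
A = rng.random((2, 2, 2))
u = np.array([0.4, 0.6])
assert np.allclose(poly_vec(A, u), mode_k_vecprod(mode_k_vecprod(A, u, 3), u, 2))
```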
The definitions of symmetric and semi-symmetric tensors are given as follows.
Definition 2 
([34,35]). A tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ is called symmetric if
$$a_{i_1 i_2 \cdots i_m} = a_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(m)}}, \quad \forall \sigma \in \Pi_m,$$
where $\Pi_m$ is the set of all permutations of length $m$.
Definition 3 
([33]). A tensor $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ is called semi-symmetric if, for any $i_1 \in [n]$,
$$a_{i_1 i_2 \cdots i_m} = a_{i_1 i_{\sigma(2)} \cdots i_{\sigma(m)}}, \quad \forall \sigma \in \Pi_{m-1}.$$
We now provide a more general notion, the k-mode symmetry of a tensor.
Definition 4. 
A tensor $\mathcal{A} \in \mathbb{R}^{[m,n]}$ is called k-mode symmetric if, for any $i_k \in [n]$,
$$a_{i_1 \cdots i_{k-1} i_k i_{k+1} \cdots i_m} = a_{i_{\sigma(1)} \cdots i_{\sigma(k-1)} i_k i_{\sigma(k+1)} \cdots i_{\sigma(m)}}, \quad \forall \sigma \in \Pi_{m-1},$$
where $\sigma$ ranges over the permutations of the index set $[m] \setminus \{k\}$.
Remark 1. 
When $k = 1$, $\mathcal{A}$ is 1-mode symmetric if and only if $\mathcal{A}$ is semi-symmetric. An $m$-th order $n$-dimensional tensor $\mathcal{A}$ has $n^m$ independent entries. If $\mathcal{A}$ is k-mode symmetric, then $\mathcal{A}$ has $n\binom{m+n-2}{m-1}$ independent entries. If $\mathcal{A}$ is symmetric, then $\mathcal{A}$ has only $\binom{m+n-1}{m}$ independent entries.
By Definitions 1, 2 and 4, we have the following lemma.
Lemma 1. 
Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. Then $\mathcal{A}$ is k-mode symmetric if and only if $\mathcal{A} \bar{\times}_k \mathbf{e}_i \in \mathbb{R}^{[m-1,n]}$ is a symmetric tensor for all $i \in [n]$.
Proof. 
From Definition 1, we obtain
$$(\mathcal{A} \bar{\times}_k \mathbf{e}_i)_{j_1 \cdots j_{k-1} j_{k+1} \cdots j_m} = \sum_{j_k = 1}^{n} a_{j_1 \cdots j_{k-1} j_k j_{k+1} \cdots j_m} (\mathbf{e}_i)_{j_k} = a_{j_1 \cdots j_{k-1} i j_{k+1} \cdots j_m},$$
which implies that, for all $i \in [n]$,
$$a_{j_1 \cdots j_{k-1} i j_{k+1} \cdots j_m} = a_{j_{\sigma(1)} \cdots j_{\sigma(k-1)} i j_{\sigma(k+1)} \cdots j_{\sigma(m)}}, \quad \forall \sigma \in \Pi_{m-1},$$
if and only if
$$(\mathcal{A} \bar{\times}_k \mathbf{e}_i)_{j_1 \cdots j_{k-1} j_{k+1} \cdots j_m} = (\mathcal{A} \bar{\times}_k \mathbf{e}_i)_{j_{\sigma(1)} \cdots j_{\sigma(k-1)} j_{\sigma(k+1)} \cdots j_{\sigma(m)}}, \quad \forall \sigma \in \Pi_{m-1}.$$
According to Definitions 2 and 4, $\mathcal{A}$ is k-mode symmetric if and only if $\mathcal{A} \bar{\times}_k \mathbf{e}_i \in \mathbb{R}^{[m-1,n]}$ is a symmetric tensor for all $i \in [n]$.    □
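Lemma 1 also suggests a simple numerical check for k-mode symmetry: take every slice $\mathcal{A} \bar{\times}_k \mathbf{e}_i$ and test whether it is a fully symmetric tensor. The NumPy sketch below does exactly that (our own helper names, suitable only for modest m since it enumerates all permutations); the last lines build a 1-mode symmetric test tensor by averaging over permutations of modes 2, …, m, in the spirit of the symmetrization used later in Example 4, though not the Tensor Toolbox routine itself.

```python
import itertools
import numpy as np

def is_symmetric(T, tol=1e-12):
    """True if the cubical tensor T is invariant under every permutation of its modes."""
    return all(np.allclose(T, np.transpose(T, p), atol=tol)
               for p in itertools.permutations(range(T.ndim)))

def is_k_mode_symmetric(A, k, tol=1e-12):
    """Lemma 1: A is k-mode symmetric iff the slice A xbar_k e_i is symmetric for every i."""
    n = A.shape[k - 1]
    # np.take(A, i, axis=k-1) is exactly the slice A xbar_k e_i
    return all(is_symmetric(np.take(A, i, axis=k - 1), tol) for i in range(n))

# build a 1-mode symmetric tensor by averaging over permutations of modes 2 and 3
rng = np.random.default_rng(1)
B = rng.random((3, 3, 3))
S = sum(np.transpose(B, (0,) + p) for p in itertools.permutations((1, 2))) / 2
assert is_k_mode_symmetric(S, 1)
```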
In [36], the definition of the transpose of a tensor was introduced as follows:
Definition 5 
([36]). Let $\mathcal{A} = (a_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ and $\sigma \in \Pi_m$. The σ-transpose of $\mathcal{A}$ is the $m$-th order $n$-dimensional tensor, denoted by $\mathcal{A}^{\langle\sigma\rangle} = (b_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{[m,n]}$, with entries
$$b_{i_1 i_2 \cdots i_m} = a_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(m)}}.$$
Remark 2. 
When $m = 2$, $\mathcal{A}$ is an $n$-by-$n$ matrix. Let $\sigma \in \Pi_2$ with $\sigma(1) = 2$ and $\sigma(2) = 1$; then $\mathcal{A}^{\langle\sigma\rangle}$ reduces to the usual matrix transpose.
We recall the notions of a pseudomonotone mapping and of the projection operator, which will be used in Section 4.
Definition 6 
([37]). A mapping $F: \mathbb{R}^n_+ \to \mathbb{R}^n$ is said to be pseudomonotone on $\mathbb{R}^n_+$ if, for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n_+$, $(\mathbf{x} - \mathbf{y})^{\top} F(\mathbf{y}) \ge 0$ implies $(\mathbf{x} - \mathbf{y})^{\top} F(\mathbf{x}) \ge 0$.
Definition 7 
([37]). Let $X$ be a non-empty closed convex subset of $\mathbb{R}^n$. The projection operator $P_X$ from $\mathbb{R}^n$ onto $X$ is defined by
$$P_X(\mathbf{y}) = \arg\min\{\|\mathbf{x} - \mathbf{y}\| : \mathbf{x} \in X\}, \quad \mathbf{y} \in \mathbb{R}^n,$$
where $\|\cdot\|$ is the $\ell_2$-norm on $\mathbb{R}^n$.
Remark 3. 
When $X = \mathbb{R}^n_+$, $P_{\mathbb{R}^n_+}(\mathbf{y})$ is abbreviated as $[\mathbf{y}]_+$. The projection $[\mathbf{y}]_+$ can be computed as
$$[\mathbf{y}]_+ = \max\{\mathbf{0}, \mathbf{y}\},$$
where the max operator denotes the componentwise maximum of two vectors.
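In code, this projection onto the non-negative orthant is a componentwise maximum; a minimal NumPy illustration (our own helper name):

```python
import numpy as np

def project_nonneg(y):
    """[y]_+ = max{0, y}, taken componentwise (projection onto R^n_+)."""
    return np.maximum(y, 0.0)

print(project_nonneg(np.array([1.5, -0.3, 0.0])))   # -> [1.5 0.  0. ]
```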

3. Description of the m-Person Symmetric Game

In this section, we first provide the definition of a tensor form of an m-person game, adapted from [31,32].
Definition 8 
([31,32]). An m-person game in a tensor form is a tuple
$$G = ([m]; \{[n_k]\}_{k=1}^{m}; \{\mathcal{A}^{(k)}\}_{k=1}^{m}),$$
where $[m]$ is the set of players, $[n_k]$ is the pure-strategy set of player $k$, and $\mathcal{A}^{(k)} = (a^{(k)}_{i_1 i_2 \cdots i_m}) \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_m}$ is the payoff tensor of player $k$. That is, for any $i_j \in [n_j]$ with $j \in [m]$, if player 1 plays his $i_1$-th pure strategy, player 2 plays his $i_2$-th pure strategy, ⋯, and player $m$ plays his $i_m$-th pure strategy, then the payoffs of player 1, player 2, ⋯, and player $m$ are $a^{(1)}_{i_1 i_2 \cdots i_m}$, $a^{(2)}_{i_1 i_2 \cdots i_m}$, ⋯, and $a^{(m)}_{i_1 i_2 \cdots i_m}$, respectively.
Consider an m-person game $G = ([m]; \{[n_k]\}_{k=1}^{m}; \{\mathcal{A}^{(k)}\}_{k=1}^{m})$. A mixed strategy of player $k$ is a probability distribution on the pure-strategy set $[n_k]$. Let
$$\Omega_k = \{\mathbf{u} \in \mathbb{R}^{n_k} : \mathbf{u} \ge \mathbf{0} \text{ and } \mathbf{1}_{n_k}^{\top}\mathbf{u} = 1\}$$
denote the set of all probability distributions on $[n_k]$. For $\mathbf{u}^{(k)} = (u^{(k)}_{i_j}) \in \Omega_k$, the probability assigned to the $i_j$-th pure strategy of player $k$ is $u^{(k)}_{i_j}$.
Let
$$\Omega := \times_{k \in [m]} \Omega_k = \{(\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(m)}) : \mathbf{u}^{(k)} \in \Omega_k, \ k = 1, \ldots, m\}$$
denote the set of strategy profiles. We say that $(\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(m)}) \in \Omega$ is a mixed-strategy combination if $\mathbf{u}^{(k)} \in \Omega_k$ is a mixed strategy of player $k$ for every $k \in [m]$. If a mixed-strategy combination $(\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(m)}) \in \Omega$ is played, then the expected payoff of player $k$ is
$$\mathcal{A}^{(k)} \mathbf{u}^{(m)} \cdots \mathbf{u}^{(2)} \mathbf{u}^{(1)} = \sum_{i_1 = 1}^{n_1} \cdots \sum_{i_m = 1}^{n_m} a^{(k)}_{i_1 i_2 \cdots i_m} u^{(m)}_{i_m} \cdots u^{(2)}_{i_2} u^{(1)}_{i_1}.$$
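This expected payoff is an m-fold contraction of the payoff tensor with the players' mixed strategies. A hedged NumPy sketch (expected_payoff is our own name, not taken from the paper):

```python
import numpy as np

def expected_payoff(Ak, strategies):
    """A^(k) u^(m) ... u^(2) u^(1): contract mode j of A^(k) with u^(j) for every j.
    `strategies` is the list [u^(1), ..., u^(m)]."""
    T = Ak
    for u in reversed(strategies):              # contract modes m, m-1, ..., 1 in turn
        T = np.tensordot(T, u, axes=([T.ndim - 1], [0]))
    return float(T)

# two-player example: uniform mixing against uniform mixing on a 2x2 payoff matrix
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
u = np.array([0.5, 0.5])
print(expected_payoff(A1, [u, u]))              # 0.5
```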
Definition 9 
([2,31]). A mixed-strategy combination $(\mathbf{u}^{(1*)}, \ldots, \mathbf{u}^{(m*)}) \in \Omega$ is a Nash equilibrium of the m-person game if, for each strategy combination $(\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(m)}) \in \Omega$ and each $k \in [m]$, it holds that
$$\mathcal{A}^{(k)} \mathbf{u}^{(m*)} \cdots \mathbf{u}^{(1*)} \ \ge \ \mathcal{A}^{(k)} \mathbf{u}^{(m*)} \cdots \mathbf{u}^{((k+1)*)} \mathbf{u}^{(k)} \mathbf{u}^{((k-1)*)} \cdots \mathbf{u}^{(1*)}.$$
What we consider here are m-person symmetric games. In such games, every player plays the same role; that is, a player's payoff is determined solely by his or her own strategy and the combination of strategies used by the other players, and has nothing to do with his or her position in the game.
Definition 10. 
An m-person game $G = ([m]; \{[n_k]\}_{k=1}^{m}; \{\mathcal{A}^{(k)}\}_{k=1}^{m})$ is symmetric if the players have an identical pure-strategy set, that is, $n_1 = n_2 = \cdots = n_m = n$, and every payoff tensor $\mathcal{A}^{(k)} \in \mathbb{R}^{[m,n]}$ satisfies
$$a^{(k)}_{i_1 i_2 \cdots i_m} = a^{(\sigma^{-1}(k))}_{i_{\sigma(1)} i_{\sigma(2)} \cdots i_{\sigma(m)}}, \quad \forall \sigma \in \Pi_m,$$
where $\sigma^{-1}$ is the inverse of the permutation $\sigma$.
It is well known that the payoff matrices of a two-person symmetric game are transposes of each other [21]. We obtain a similar conclusion for the m-person symmetric game.
Theorem 1. 
Suppose that $G = ([m]; \{[n_k]\}_{k=1}^{m}; \{\mathcal{A}^{(k)}\}_{k=1}^{m})$ is an m-person game. Then G is symmetric if and only if
(i)
$\mathcal{A}^{(k)}$ is k-mode symmetric for all $k \in [m]$;
(ii)
if $k \ne j$, then $(\mathcal{A}^{(k)})^{\langle\sigma\rangle} = \mathcal{A}^{(j)}$ for every $\sigma \in \Pi_m$ with $\sigma(k) = j$.
Proof. 
Firstly, we show the necessity. By Definition 10, $n_1 = n_2 = \cdots = n_m = n$. Combining this with Definition 1, we have
$$(\mathcal{A}^{(k)} \bar{\times}_k \mathbf{e}_i)_{j_1 \cdots j_{k-1} j_{k+1} \cdots j_m} = \sum_{j_k = 1}^{n} a^{(k)}_{j_1 \cdots j_{k-1} j_k j_{k+1} \cdots j_m} (\mathbf{e}_i)_{j_k} = a^{(k)}_{j_1 \cdots j_{k-1} i j_{k+1} \cdots j_m}. \tag{1}$$
Let $p \in \Pi_{m-1}$ be a permutation of the set $[m] \setminus \{k\}$ and let $\pi \in \Pi_m$ satisfy
$$\pi(i) = \begin{cases} p(i), & \text{if } i \in [m] \setminus \{k\}, \\ k, & \text{if } i = k. \end{cases}$$
By (1) and Definition 10, we have
$$(\mathcal{A}^{(k)} \bar{\times}_k \mathbf{e}_i)_{j_{p(1)} \cdots j_{p(k-1)} j_{p(k+1)} \cdots j_{p(m)}} = a^{(k)}_{j_{p(1)} \cdots j_{p(k-1)} i j_{p(k+1)} \cdots j_{p(m)}} = a^{(\pi^{-1}(k))}_{j_{\pi(1)} \cdots j_{\pi(k-1)} i j_{\pi(k+1)} \cdots j_{\pi(m)}} = a^{(k)}_{j_1 \cdots j_{k-1} i j_{k+1} \cdots j_m} = (\mathcal{A}^{(k)} \bar{\times}_k \mathbf{e}_i)_{j_1 \cdots j_{k-1} j_{k+1} \cdots j_m}.$$
Therefore, $\mathcal{A}^{(k)} \bar{\times}_k \mathbf{e}_i$ is a symmetric tensor for all $i \in [n]$. By Lemma 1, $\mathcal{A}^{(k)}$ is k-mode symmetric. Hence, statement (i) holds.
Now let $\sigma \in \Pi_m$ satisfy $\sigma(k) = j$ with $k \ne j$. From Definition 5 and Definition 10, we have
$$((\mathcal{A}^{(k)})^{\langle\sigma\rangle})_{l_1 \cdots l_j \cdots l_k \cdots l_m} = a^{(k)}_{l_{\sigma(1)} \cdots l_{\sigma(j)} \cdots l_{\sigma(k)} \cdots l_{\sigma(m)}} = a^{(\sigma^{-1}(j))}_{l_{\sigma(1)} \cdots l_{\sigma(j)} \cdots l_{\sigma(k)} \cdots l_{\sigma(m)}} = a^{(j)}_{l_1 \cdots l_j \cdots l_k \cdots l_m} = (\mathcal{A}^{(j)})_{l_1 \cdots l_j \cdots l_k \cdots l_m},$$
which implies $(\mathcal{A}^{(k)})^{\langle\sigma\rangle} = \mathcal{A}^{(j)}$. Hence, statement (ii) holds.
Now, we show the sufficiency. Since $\mathcal{A}^{(k)}$ is k-mode symmetric for all $k \in [m]$, we have $n_1 = n_2 = \cdots = n_m = n$. For any $\sigma \in \Pi_m$ and a fixed index $j$, either $\sigma(j) = j$ or $\sigma(k) = j$ for some $k \ne j$. To prove the statement, we consider the following two cases.
Case 1. 
$\sigma(j) = j$. Let $p \in \Pi_{m-1}$ be the permutation of the set $[m] \setminus \{j\}$ that satisfies $p(i) = \sigma(i)$. Since $\mathcal{A}^{(j)}$ is j-mode symmetric by statement (i), we have
$$a^{(j)}_{i_1 \cdots i_{j-1} i_j i_{j+1} \cdots i_m} = a^{(j)}_{i_{p(1)} \cdots i_{p(j-1)} i_j i_{p(j+1)} \cdots i_{p(m)}} = a^{(j)}_{i_{\sigma(1)} \cdots i_{\sigma(j-1)} i_j i_{\sigma(j+1)} \cdots i_{\sigma(m)}} = a^{(\sigma^{-1}(j))}_{i_{\sigma(1)} \cdots i_{\sigma(j-1)} i_{\sigma(j)} i_{\sigma(j+1)} \cdots i_{\sigma(m)}}.$$
Case 2. 
$\sigma(k) = j$ with $k \ne j$. According to $(\mathcal{A}^{(k)})^{\langle\sigma\rangle} = \mathcal{A}^{(j)}$ in statement (ii), we obtain
$$a^{(j)}_{i_1 \cdots i_{j-1} i_j i_{j+1} \cdots i_m} = a^{(k)}_{i_{\sigma(1)} \cdots i_{\sigma(j-1)} i_{\sigma(j)} i_{\sigma(j+1)} \cdots i_{\sigma(m)}} = a^{(\sigma^{-1}(j))}_{i_{\sigma(1)} \cdots i_{\sigma(j-1)} i_{\sigma(j)} i_{\sigma(j+1)} \cdots i_{\sigma(m)}}.$$
By Definition 10, $G = ([m]; \{[n_k]\}_{k=1}^{m}; \{\mathcal{A}^{(k)}\}_{k=1}^{m})$ is an m-person symmetric game.    □
Remark 4.
(I) Consider the m-player Volunteer's Dilemma [38,39]: m individuals observe an approaching predator and need to decide, independently and without coordination, whether or not to sound the alarm. If an individual gives the alarm, the predator's ambush may be ruined. Each individual therefore has two choices: volunteer (alarm) or ignore (no alarm). An alarm call benefits everyone because it deters the predator from attacking; each player, however, prefers that the other players report the presence of the predator, because giving the alarm has a cost $c > 0$. If a player gives the alarm, he receives a payoff of $b - c$, while the others who do not give the alarm receive a payoff of $b$. If nobody raises the alarm, the predator attacks, inflicting damage $a > c$ and a payoff of $b - a$ on everyone. Herein, we use the digits "1" and "2" to denote the alarm and non-alarm strategies, respectively. Therefore, the tensor form of the m-player Volunteer's Dilemma can be represented as $G = ([m], \{[2]\}_{k=1}^{m}, \{\mathcal{A}^{(k)}\}_{k=1}^{m})$, where
$$a^{(k)}_{i_1 \cdots i_{k-1} 1 i_{k+1} \cdots i_m} = b - c$$
and
$$a^{(k)}_{i_1 \cdots i_{k-1} 2 i_{k+1} \cdots i_m} = \begin{cases} b - a, & \text{if } i_1 = \cdots = i_{k-1} = i_{k+1} = \cdots = i_m = 2, \\ b, & \text{otherwise}. \end{cases}$$
By direct calculation, statements (i) and (ii) of Theorem 1 hold. Therefore, the m-player Volunteer's Dilemma is symmetric.
(II) From Theorem 1, the first player's payoff tensor $\mathcal{A}^{(1)}$ is 1-mode symmetric, and the payoffs in an m-person symmetric game are uniquely determined by $\mathcal{A}^{(1)}$. Therefore, in the rest of this paper, we denote an m-person symmetric game by the tuple $G = ([m]; [n]; \mathcal{A}^{(1)})$, and let
$$\Omega_1 = \cdots = \Omega_m = \{\mathbf{u} \in \mathbb{R}^n : \mathbf{u} \ge \mathbf{0} \text{ and } \mathbf{1}_n^{\top}\mathbf{u} = 1\}.$$
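In code, Remark 4 (II) means that only $\mathcal{A}^{(1)}$ needs to be stored for a symmetric game; the other payoff tensors can be recovered through the σ-transpose of Theorem 1 (ii). A NumPy sketch under that reading (payoff_tensor_k is our own helper, not the authors'):

```python
import numpy as np

def payoff_tensor_k(A1, k):
    """For an m-person symmetric game, recover player k's payoff tensor from A^(1)
    via Theorem 1 (ii), using the transposition sigma that swaps 1 and k (so sigma(1) = k)."""
    axes = list(range(A1.ndim))
    axes[0], axes[k - 1] = axes[k - 1], axes[0]
    # entries b_{i_1...i_m} = a^(1)_{i_sigma(1) ... i_sigma(m)}
    return np.transpose(A1, axes)
```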
Definition 11 
([2]). A Nash equilibrium $(\mathbf{u}^{(1*)}, \mathbf{u}^{(2*)}, \ldots, \mathbf{u}^{(m*)}) \in \Omega$ is symmetric if all players take the same strategy, that is, $\mathbf{u}^{(1*)} = \mathbf{u}^{(2*)} = \cdots = \mathbf{u}^{(m*)}$.
Nash proved the existence of equilibrium for m-person games and symmetric equilibrium for symmetric m-person games in 1951 [2], and the statement is summarized in the following Lemma.
Lemma 2 
([2]). Every m-person symmetric game has a symmetric Nash equilibrium.
Now, we are going to propose two equivalent conditions to ensure that a mixed-strategy combination is a symmetric Nash equilibrium, which are shown in Theorems 2 and 3.
Lemma 3. 
Suppose that $G = ([m]; [n]; \mathcal{A}^{(1)})$ is an m-person symmetric game. Then $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ is a symmetric Nash equilibrium if and only if $\mathbf{x}^*$ is an optimal solution to the following optimization problem:
$$\max \ \mathcal{A}^{(1)} (\mathbf{x}^*)^{m-1} \mathbf{x} \quad \text{s.t.} \quad \mathbf{x} \in \{\mathbf{x} = (x_i) \in \mathbb{R}^n : \mathbf{x} \ge \mathbf{0} \text{ and } \mathbf{1}_n^{\top}\mathbf{x} = 1\}. \tag{2}$$
Proof. 
By Definition 9, $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ is a symmetric Nash equilibrium if and only if, for any strategy combination $(\mathbf{u}^{(1)}, \ldots, \mathbf{u}^{(m)}) \in \Omega$ and any $k \in [m]$,
$$\mathcal{A}^{(k)} (\mathbf{x}^*)^{m} \ge \mathcal{A}^{(k)} (\mathbf{x}^*)^{m-k} \mathbf{u}^{(k)} (\mathbf{x}^*)^{k-1}. \tag{3}$$
Let $\sigma \in \Pi_m$ with $\sigma(k) = 1$, $\sigma(1) = k$, and $\sigma(i) = i$ for all $i \in [m] \setminus \{1, k\}$. By Definition 1, we have
$$\mathcal{A}^{(k)} (\mathbf{x}^*)^{m} = (\mathcal{A}^{(k)})^{\langle\sigma\rangle} (\mathbf{x}^*)^{m}$$
and
$$\mathcal{A}^{(k)} (\mathbf{x}^*)^{m-k} \mathbf{u}^{(k)} (\mathbf{x}^*)^{k-1} = (\mathcal{A}^{(k)})^{\langle\sigma\rangle} (\mathbf{x}^*)^{m-1} \mathbf{u}^{(k)}.$$
Together with Theorem 1, we obtain $(\mathcal{A}^{(k)})^{\langle\sigma\rangle} = \mathcal{A}^{(1)}$. Therefore, inequality (3) is equivalent to
$$\mathcal{A}^{(1)} (\mathbf{x}^*)^{m} \ge \mathcal{A}^{(1)} (\mathbf{x}^*)^{m-1} \mathbf{u}^{(k)}.$$
The proof is completed.    □
Let
$$\mathcal{A} = \eta\,\mathcal{E} - \mathcal{A}^{(1)}, \tag{4}$$
where $\eta = \max_{i_1, i_2, \ldots, i_m \in [n]} a^{(1)}_{i_1 i_2 \cdots i_m} + 1$ and $\mathcal{E} \in \mathbb{R}^{[m,n]}$ is the $m$-th order $n$-dimensional tensor whose entries are all ones. Since $\mathcal{A}^{(1)}$ is 1-mode symmetric, $\mathcal{A}$ is also a 1-mode symmetric tensor, and all of its entries are positive.
Theorem 2. 
Suppose that $G = ([m]; [n]; \mathcal{A}^{(1)})$ is an m-person symmetric game and $\mathcal{A}$ is defined by (4). Then $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ is a symmetric Nash equilibrium if and only if $\mathbf{x}^*$ is an optimal solution to the following optimization problem:
$$\min \ \mathcal{A} (\mathbf{x}^*)^{m-1} \mathbf{x} \quad \text{s.t.} \quad \mathbf{x} \in \{\mathbf{x} = (x_i) \in \mathbb{R}^n : \mathbf{x} \ge \mathbf{0} \text{ and } \mathbf{1}_n^{\top}\mathbf{x} = 1\}. \tag{5}$$
Proof. 
For any mixed-strategy combination $(\mathbf{x}^*, \ldots, \mathbf{x}^*, \mathbf{x}) \in \Omega$, we have
$$\mathcal{A} (\mathbf{x}^*)^{m-1} \mathbf{x} = \sum_{i_1, \ldots, i_{m-1}, i_m = 1}^{n} a_{i_1 \cdots i_{m-1} i_m} x^*_{i_1} \cdots x^*_{i_{m-1}} x_{i_m} = \sum_{i_1, \ldots, i_{m-1}, i_m = 1}^{n} (\eta - a^{(1)}_{i_1 \cdots i_{m-1} i_m}) x^*_{i_1} \cdots x^*_{i_{m-1}} x_{i_m} = \eta - \sum_{i_1, \ldots, i_{m-1}, i_m = 1}^{n} a^{(1)}_{i_1 \cdots i_{m-1} i_m} x^*_{i_1} \cdots x^*_{i_{m-1}} x_{i_m} = \eta - \mathcal{A}^{(1)} (\mathbf{x}^*)^{m-1} \mathbf{x}.$$
Therefore, x * is an optimal solution to the optimization problem (5) if and only if x * is an optimal solution to the optimization problem (2).    □
We consider the following model: find a mixed-strategy combination $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ such that $\mathbf{x}^*$ is an optimal solution to the optimization problem (5). Given a tensor $\mathcal{B} \in \mathbb{R}^{[m,n]}$ and a vector $\mathbf{q} \in \mathbb{R}^n$, the tensor complementarity problem, denoted by TCP$(\mathbf{q}, \mathcal{B})$, is to find a vector $\mathbf{z} \in \mathbb{R}^n$ such that
$$\mathbf{z} \ge \mathbf{0}, \quad \mathcal{B}\mathbf{z}^{m-1} + \mathbf{q} \ge \mathbf{0}, \quad \mathbf{z}^{\top}(\mathcal{B}\mathbf{z}^{m-1} + \mathbf{q}) = 0.$$
The TCP$(\mathbf{q}, \mathcal{B})$ was introduced by Song and Qi [40] and has been further studied by many researchers [41,42,43,44,45,46,47,48,49,50]. The tensor complementarity problem is a generalization of the linear complementarity problem (corresponding to $m = 2$) [51]; it is also a special instance of the nonlinear complementarity problem and a particular case of the variational inequality problem over the closed convex cone $\mathbb{R}^n_+$ [37]. We denote the solution set of TCP$(\mathbf{q}, \mathcal{B})$ by SOL$(\mathbf{q}, \mathcal{B})$. In this section, we show that the m-person symmetric game can be reformulated as a specific tensor complementarity problem.
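A convenient way to test a candidate solution of TCP(q, B) numerically is the natural-map residual $\mathbf{z} - [\mathbf{z} - (\mathcal{B}\mathbf{z}^{m-1} + \mathbf{q})]_+$, which vanishes exactly at solutions. This residual is a standard device for complementarity problems rather than something taken from the paper; a minimal NumPy sketch:

```python
import numpy as np

def poly_vec(B, z):
    """B z^{m-1}: contract all modes of the cubical tensor B except the first with z."""
    T = B
    while T.ndim > 1:
        T = np.tensordot(T, z, axes=([T.ndim - 1], [0]))
    return T

def tcp_residual(B, q, z):
    """Natural-map residual of TCP(q, B); it is the zero vector if and only if
    z >= 0, B z^{m-1} + q >= 0, and z^T (B z^{m-1} + q) = 0."""
    w = poly_vec(B, z) + q
    return z - np.maximum(z - w, 0.0)
```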
Let $G = ([m]; [n]; \mathcal{A}^{(1)})$ be an m-person symmetric game and let $\mathcal{A}$ be defined by (4). We construct the following tensor complementarity problem:
$$\mathbf{y} \ge \mathbf{0}, \quad \mathcal{A}\mathbf{y}^{m-1} - \mathbf{1}_n \ge \mathbf{0}, \quad \mathbf{y}^{\top}(\mathcal{A}\mathbf{y}^{m-1} - \mathbf{1}_n) = 0. \tag{6}$$
There is an explicit correspondence between the symmetric Nash equilibria of the m-person symmetric game and the solutions of the TCP$(-\mathbf{1}_n, \mathcal{A})$ in (6).
Theorem 3. 
Suppose that $G = ([m]; [n]; \mathcal{A}^{(1)})$ is an m-person symmetric game and $\mathcal{A}$ is defined by (4). If $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ is a symmetric Nash equilibrium of the game $G = ([m]; [n]; \mathcal{A}^{(1)})$, then $\mathbf{y}^* \in \mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$, where
$$\mathbf{y}^* = \frac{\mathbf{x}^*}{\sqrt[m-1]{\mathcal{A}(\mathbf{x}^*)^{m}}}. \tag{7}$$
Conversely, if $\mathbf{y}^* \in \mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$, then $\mathbf{y}^* \ne \mathbf{0}$ and $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ with
$$\mathbf{x}^* = \frac{\mathbf{y}^*}{\mathbf{1}_n^{\top}\mathbf{y}^*} \tag{8}$$
is a symmetric Nash equilibrium of the game G = ( [ m ] ; [ n ] ; A ( 1 ) ) .
Proof. 
Firstly, we show the necessity. Suppose that $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ is a symmetric Nash equilibrium of the m-person symmetric game $G = ([m]; [n]; \mathcal{A}^{(1)})$. By the Karush–Kuhn–Tucker conditions of problem (5), there exist a number $\lambda^* \in \mathbb{R}$ and a non-negative vector $\mathbf{b}^* \in \mathbb{R}^n$ such that
$$\mathcal{A}(\mathbf{x}^*)^{m-1} - \lambda^*\mathbf{1}_n - \mathbf{b}^* = \mathbf{0} \tag{9}$$
and
$$\mathbf{1}_n^{\top}\mathbf{x}^* = 1, \quad \mathbf{x}^* \ge \mathbf{0}, \quad \mathbf{b}^* \ge \mathbf{0}, \quad (\mathbf{b}^*)^{\top}\mathbf{x}^* = 0. \tag{10}$$
By (9), we have
$$\mathcal{A}(\mathbf{x}^*)^{m} - \lambda^*\mathbf{1}_n^{\top}\mathbf{x}^* - (\mathbf{b}^*)^{\top}\mathbf{x}^* = 0,$$
which together with (10) yields
$$\lambda^* = \mathcal{A}(\mathbf{x}^*)^{m}.$$
Thus,
$$\mathbf{y}^* = \sqrt[m-1]{\tfrac{1}{\lambda^*}}\,\mathbf{x}^* \ge \mathbf{0}. \tag{11}$$
According to (9), we obtain
$$\mathcal{A}(\mathbf{y}^*)^{m-1} - \mathbf{1}_n = \frac{1}{\lambda^*}(\lambda^*\mathbf{1}_n + \mathbf{b}^*) - \mathbf{1}_n = \frac{1}{\lambda^*}\,\mathbf{b}^* \ge \mathbf{0}. \tag{12}$$
Combining (12) with (10), we obtain
$$(\mathbf{y}^*)^{\top}(\mathcal{A}(\mathbf{y}^*)^{m-1} - \mathbf{1}_n) = \frac{1}{\lambda^*\sqrt[m-1]{\lambda^*}}\,(\mathbf{x}^*)^{\top}\mathbf{b}^* = 0. \tag{13}$$
Applying the results of (12) and (13) to (11), we obtain that $\mathbf{y}^*$ defined by (7) is a solution to the TCP$(-\mathbf{1}_n, \mathcal{A})$; that is, $\mathbf{y}^* \in \mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$.
Now, we show the sufficiency. Suppose that $\mathbf{y}^* \in \mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$; then
$$\mathbf{y}^* \ge \mathbf{0}, \quad \mathcal{A}(\mathbf{y}^*)^{m-1} - \mathbf{1}_n \ge \mathbf{0}, \quad (\mathbf{y}^*)^{\top}(\mathcal{A}(\mathbf{y}^*)^{m-1} - \mathbf{1}_n) = 0. \tag{14}$$
By the first and second conditions of (14), we have $\mathbf{y}^* \ne \mathbf{0}$. Next, we prove that $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*) \in \Omega$ defined by (8) is a symmetric Nash equilibrium of the m-person symmetric game $G = ([m]; [n]; \mathcal{A}^{(1)})$. For this purpose, we need to show that there exist a number $\lambda^* \in \mathbb{R}$ and a non-negative vector $\mathbf{b}^* \in \mathbb{R}^n$ such that (9) and (10) hold.
From the equality in (14), we obtain
$$\mathcal{A}(\mathbf{y}^*)^{m} - \mathbf{1}_n^{\top}\mathbf{y}^* = 0.$$
Since $\mathbf{y}^* \ge \mathbf{0}$ and $\mathbf{y}^* \ne \mathbf{0}$, we have $\mathbf{1}_n^{\top}\mathbf{y}^* > 0$, and then
$$\mathcal{A}\left(\frac{\mathbf{y}^*}{\mathbf{1}_n^{\top}\mathbf{y}^*}\right)^{m} - \frac{1}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}} = 0.$$
By (8), the above equality becomes
$$\mathcal{A}(\mathbf{x}^*)^{m} - \frac{1}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}} = 0. \tag{15}$$
From $\mathbf{y}^* \ge \mathbf{0}$, $\mathbf{1}_n^{\top}\mathbf{y}^* > 0$, and the definition of $\mathbf{x}^*$, it follows that $\mathbf{x}^* \ge \mathbf{0}$ and $\mathbf{1}_n^{\top}\mathbf{x}^* = 1$. In addition, by the second condition of (14), we have
$$\mathcal{A}(\mathbf{x}^*)^{m-1} - \frac{\mathbf{1}_n}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}} \ge \mathbf{0},$$
which implies that there exists a non-negative vector $\mathbf{b}^* \in \mathbb{R}^n$ such that
$$\mathbf{b}^* = \mathcal{A}(\mathbf{x}^*)^{m-1} - \frac{\mathbf{1}_n}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}}.$$
The above result together with (15) yields
$$(\mathbf{b}^*)^{\top}\mathbf{x}^* = (\mathbf{x}^*)^{\top}\mathcal{A}(\mathbf{x}^*)^{m-1} - \frac{\mathbf{1}_n^{\top}\mathbf{x}^*}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}} = \mathcal{A}(\mathbf{x}^*)^{m} - \frac{(\mathbf{x}^*)^{\top}\mathbf{1}_n}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}} = \mathcal{A}(\mathbf{x}^*)^{m} - \frac{1}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}} = 0.$$
Thus, we obtain that (9) and (10) hold with
$$\lambda^* = \frac{1}{(\mathbf{1}_n^{\top}\mathbf{y}^*)^{m-1}}.$$
Therefore, ( x * , x * , , x * ) Ω defined by (8) is a symmetric Nash equilibrium of the m-person symmetric game G = ( [ m ] ; [ n ] ; A ( 1 ) ) .    □
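Theorem 3 gives an explicit two-way map between symmetric equilibria and TCP solutions, which is easy to code. The sketch below reuses the poly_vec helper from the earlier snippet; the function names y_from_x and x_from_y are ours.

```python
import numpy as np

def y_from_x(A, x):
    """Equation (7): y* = x* / (A x*^m)^(1/(m-1)), mapping an equilibrium strategy
    to a solution of the tensor complementarity problem (6)."""
    m = A.ndim
    val = float(poly_vec(A, x) @ x)          # the scalar A x^m
    return x / val ** (1.0 / (m - 1))

def x_from_y(y):
    """Equation (8): x* = y* / (1_n^T y*), mapping a TCP solution back to a strategy."""
    return y / y.sum()
```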

4. Algorithm and Numerical Results

The hyperplane projection algorithm [37] is well known as a fast and effective method for solving complementarity problems. In this section, we apply the hyperplane projection algorithm to solve the TCP$(-\mathbf{1}_n, \mathcal{A})$ and provide some preliminary numerical results for solving the m-person symmetric game.
Let $G = ([m]; [n]; \mathcal{A}^{(1)})$ be an m-person symmetric game and let $\mathcal{A}$ be defined by (4). Denote $F(\mathbf{y}) = \mathcal{A}\mathbf{y}^{m-1} - \mathbf{1}_n$; then we can rewrite the TCP$(-\mathbf{1}_n, \mathcal{A})$ in (6) as
$$\mathbf{y} \ge \mathbf{0}, \quad F(\mathbf{y}) \ge \mathbf{0}, \quad \mathbf{y}^{\top} F(\mathbf{y}) = 0. \tag{16}$$
It is well known that (16) is equivalent to the variational inequality problem: find a non-negative vector $\mathbf{y} \in \mathbb{R}^n_+$ such that
$$(\mathbf{x} - \mathbf{y})^{\top} F(\mathbf{y}) \ge 0, \quad \forall \mathbf{x} \in \mathbb{R}^n_+, \tag{17}$$
which in turn is equivalent to finding a root of the equation
$$\mathbf{y} = [\mathbf{y} - \tau F(\mathbf{y})]_+, \quad \tau > 0.$$
We use the hyperplane projection algorithm, see Algorithm 1, to solve problem (16); it can be described geometrically as follows. Let $\tau > 0$ be a fixed scalar and let $\mathbf{y}^{(k)} \in \mathbb{R}^n$ be a given vector. Firstly, we compute the point $[\mathbf{y}^{(k)} - \tau F(\mathbf{y}^{(k)})]_+$. Secondly, based on an Armijo-type search routine, we search the line segment joining $\mathbf{y}^{(k)}$ and $[\mathbf{y}^{(k)} - \tau F(\mathbf{y}^{(k)})]_+$ and obtain a point $\mathbf{u}^{(k)}$ such that the hyperplane
$$H_k := \{\mathbf{y} \in \mathbb{R}^n : F(\mathbf{u}^{(k)})^{\top}(\mathbf{y} - \mathbf{u}^{(k)}) = 0\}$$
strictly separates $\mathbf{y}^{(k)}$ from $\mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$. We then project $\mathbf{y}^{(k)}$ onto $H_k$ to obtain a point $\mathbf{w}^{(k)}$, and project $\mathbf{w}^{(k)}$ onto $\mathbb{R}^n_+$ to obtain $\mathbf{y}^{(k+1)}$. It can be shown that $\mathbf{y}^{(k+1)}$ is closer to $\mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$ than $\mathbf{y}^{(k)}$ [37].
From [37], we obtain the following theorems.
Theorem 4. 
Suppose that $\mathbf{y}^{(k)}$ is not a solution to the TCP$(-\mathbf{1}_n, \mathcal{A})$ (16). Then there exists a finite integer $i_k \ge 0$ such that (20) holds and
$$F(\mathbf{u}^{(k)})^{\top}(\mathbf{y}^{(k)} - \mathbf{u}^{(k)}) > 0. \tag{18}$$
Remark 5. 
If $F(\mathbf{y})$ is pseudomonotone on $\mathbb{R}^n_+$, inequality (18) shows that the hyperplane $H_k$ strictly separates $\mathbf{y}^{(k)}$ from $\mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$. Indeed, since $\mathbf{u}^{(k)} \in \mathbb{R}^n_+$, inequality (17) implies that, for any solution $\mathbf{y}^*$ of the TCP$(-\mathbf{1}_n, \mathcal{A})$,
$$(\mathbf{u}^{(k)} - \mathbf{y}^*)^{\top} F(\mathbf{y}^*) \ge 0. \tag{19}$$
Since $F(\mathbf{y})$ is pseudomonotone on $\mathbb{R}^n_+$, combining (18) and (19) yields
$$(\mathbf{u}^{(k)} - \mathbf{y}^*)^{\top} F(\mathbf{u}^{(k)}) \ge 0 > (\mathbf{u}^{(k)} - \mathbf{y}^{(k)})^{\top} F(\mathbf{u}^{(k)}).$$
Theorem 5. 
Suppose that $F(\mathbf{y})$ is pseudomonotone on $\mathbb{R}^n_+$; then there exists $\mathbf{y}^* \in \mathrm{SOL}(-\mathbf{1}_n, \mathcal{A})$ such that
$$\lim_{k \to \infty} \mathbf{y}^{(k)} = \mathbf{y}^*.$$
Algorithm 1 Hyperplane projection algorithm to calculate the symmetric Nash equilibrium
Input: the first player's payoff tensor $\mathcal{A}^{(1)}$, a starting vector $\mathbf{y}^{(0)} \in \mathbb{R}^n_+$, scalars $\tau > 0$ and $0 < \sigma < 1$, the maximum number of iterations MI, and the tolerance $\epsilon$.
Output: the symmetric Nash equilibrium $(\mathbf{x}^*, \mathbf{x}^*, \ldots, \mathbf{x}^*)$.
1: Compute $\eta = \max_{i_1, i_2, \ldots, i_m \in [n]} a^{(1)}_{i_1 i_2 \cdots i_m} + 1$ and set $\mathcal{A} = \eta\,\mathcal{E} - \mathcal{A}^{(1)}$.
2: for $k = 0, 1, \ldots, \mathrm{MI}$ do
3:   Compute
$$\mathbf{z}^{(k)} = [\mathbf{y}^{(k)} - \tau F(\mathbf{y}^{(k)})]_+$$
   and find the smallest non-negative integer $i_k$ such that, with $i = i_k$,
$$F(2^{-i}\mathbf{z}^{(k)} + (1 - 2^{-i})\mathbf{y}^{(k)})^{\top}(\mathbf{y}^{(k)} - \mathbf{z}^{(k)}) \ge \frac{\sigma}{\tau}\,\|\mathbf{y}^{(k)} - \mathbf{z}^{(k)}\|^2. \tag{20}$$
4:   Set
$$\mathbf{u}^{(k)} = 2^{-i_k}\mathbf{z}^{(k)} + (1 - 2^{-i_k})\mathbf{y}^{(k)}$$
   and
$$\mathbf{w}^{(k)} = P_{H_k}(\mathbf{y}^{(k)}) = \mathbf{y}^{(k)} - \frac{F(\mathbf{u}^{(k)})^{\top}(\mathbf{y}^{(k)} - \mathbf{u}^{(k)})}{\|F(\mathbf{u}^{(k)})\|^2}\, F(\mathbf{u}^{(k)}).$$
5:   Set $\mathbf{y}^{(k+1)} = [\mathbf{w}^{(k)}]_+$.
6:   if $\|\mathbf{y}^{(k+1)} - \mathbf{y}^{(k)}\| < \epsilon$ or $k = \mathrm{MI}$ then
7:     $\mathbf{x}^* = \mathbf{y}^{(k)} / (\mathbf{1}_n^{\top}\mathbf{y}^{(k)})$, break;
8:   else
9:     $k = k + 1$;
10:  end if
11: end for
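For readers who prefer a runnable reference, the following is a compact Python/NumPy sketch of Algorithm 1. It is not the authors' MATLAB code; the function symmetric_nash and its safeguards (a cap on the line-search exponent and a guard against a vanishing $F(\mathbf{u}^{(k)})$) are our own additions, and the default parameters mirror the choice used for Examples 1–3 below.

```python
import numpy as np

def poly_vec(A, y):
    """A y^{m-1}: contract modes m, ..., 2 of the cubical tensor A with y."""
    T = A
    while T.ndim > 1:
        T = np.tensordot(T, y, axes=([T.ndim - 1], [0]))
    return T

def symmetric_nash(A1, y0, tau=0.5, sigma=0.01, max_iter=2000, tol=1e-6):
    """Hyperplane projection method applied to the TCP (6) built from A^(1).
    Returns (x, y, k): x approximates the common strategy of a symmetric equilibrium."""
    n = A1.shape[0]
    eta = A1.max() + 1.0
    A = eta * np.ones_like(A1, dtype=float) - A1        # A = eta*E - A^(1), all entries > 0
    F = lambda y: poly_vec(A, y) - np.ones(n)           # F(y) = A y^{m-1} - 1_n

    y = np.asarray(y0, dtype=float)
    k = 0
    for k in range(max_iter):
        z = np.maximum(y - tau * F(y), 0.0)             # z^(k) = [y^(k) - tau F(y^(k))]_+
        d = y - z
        if np.linalg.norm(d) < tol:                     # y is (numerically) a fixed point
            break
        # Armijo-type search (inequality (20)): smallest i with
        # F(2^-i z + (1 - 2^-i) y)^T (y - z) >= (sigma/tau) ||y - z||^2
        i, u = 0, z
        while F(u) @ d < (sigma / tau) * (d @ d) and i < 60:   # cap at 60 is a safeguard
            i += 1
            u = 2.0 ** (-i) * z + (1.0 - 2.0 ** (-i)) * y
        Fu = F(u)
        if Fu @ Fu == 0.0:                              # safeguard, not part of Algorithm 1
            break
        w = y - ((Fu @ (y - u)) / (Fu @ Fu)) * Fu       # project y^(k) onto the hyperplane H_k
        y_new = np.maximum(w, 0.0)                      # y^(k+1) = [w^(k)]_+
        if np.linalg.norm(y_new - y) < tol:
            y = y_new
            break
        y = y_new
    x = y / y.sum()                                     # x^* = y^(k) / (1_n^T y^(k))
    return x, y, k
```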
In the following, we give some preliminary numerical results of Algorithm 1 for computing the symmetric Nash equilibrium of the m-person symmetric game. Throughout our experiments, we set the maximum number of iterations MI = 2000 and the tolerance $\epsilon = 10^{-6}$. All tests were conducted in MATLAB R2015a on a machine with an Intel(R) Core(TM) i7-7500U CPU at 2.70 GHz and 8.00 GB of RAM. For Examples 1–3 below, the parameters used in Algorithm 1 were chosen as
$$\tau := 0.5, \quad \sigma := 0.01, \quad \mathbf{y}^{(0)} := 0.1 \cdot \mathbf{1}_n.$$
Example 1. 
According to Remark 4, the tensor form of the m-player Volunteer’s Dilemma can be represented as G = ( [ m ] , [ 2 ] , A ( 1 ) ) , where
$$a^{(1)}_{1 i_2 \cdots i_m} = b - c, \qquad a^{(1)}_{2 i_2 \cdots i_m} = \begin{cases} b - a, & \text{if } i_2 = i_3 = \cdots = i_m = 2, \\ b, & \text{otherwise}. \end{cases}$$
We use Algorithm 1 to compute the symmetric Nash equilibrium of the m-player Volunteer's Dilemma; the numerical results are reported in Table 1. In this table, No.Iter denotes the number of iteration steps, CPU(s) denotes the CPU time in seconds, Res denotes the value of $\|\mathbf{y}^{(k+1)} - \mathbf{y}^{(k)}\|$ when the algorithm stops, and SNE($\mathbf{x}^*$) denotes the first player's strategy in the symmetric Nash equilibrium.
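A hedged sketch of how this example can be reproduced with the Python functions given after Algorithm 1 (volunteers_dilemma_tensor is our own helper; strategy index 0 stands for "alarm" and 1 for "ignore"):

```python
import numpy as np

def volunteers_dilemma_tensor(m, b, a, c):
    """First player's payoff tensor A^(1) of the m-player Volunteer's Dilemma."""
    A1 = np.empty((2,) * m)
    A1[0, ...] = b - c            # player 1 alarms: payoff b - c, whatever the others do
    A1[1, ...] = b                # player 1 ignores and at least one other player alarms
    A1[(1,) * m] = b - a          # nobody alarms: everyone suffers the attack, payoff b - a
    return A1

# parameters of the first row of Table 1: m = 3, b = 200, a = 4, c = 1
A1 = volunteers_dilemma_tensor(3, 200, 4, 1)
x, _, _ = symmetric_nash(A1, y0=0.1 * np.ones(2))     # solver sketch from after Algorithm 1
print(np.round(x, 4))                                 # expected to be close to (0.5, 0.5); cf. Table 1
```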
Example 2. 
In the m-player Snowdrift Game [28], m individuals are driving on a road that is blocked by a snowdrift. Each individual has the option of cooperating by shoveling the snow or not. The snow needs to be removed before they can continue their journey home. Everyone wants to go home, but not everyone is willing to shovel. The benefit of getting home is $b$ and the cost of shoveling is $c$; it is assumed that the benefit exceeds the cost, i.e., $b > c$. If only $k$ individuals shovel, each of them gets $b - c/k$, whereas those who refuse to shovel get home for free and receive $b$; in particular, if everyone shovels, then everyone gets $b - c/m$. Nobody gets anything if everyone refuses to shovel. Let 1 and 2 denote the shoveling and not-shoveling strategies, respectively. It is easy to see that the m-player Snowdrift Game is symmetric. Let $\phi(1)$ denote the number of 1s in $i_2 i_3 \cdots i_m$. The tensor form of the m-player Snowdrift Game can be represented as $G = ([m], [2], \mathcal{A}^{(1)})$, where
$$a^{(1)}_{1 i_2 \cdots i_m} = b - \frac{c}{1 + \phi(1)}, \qquad a^{(1)}_{2 i_2 \cdots i_m} = \begin{cases} 0, & \text{if } \phi(1) = 0, \\ b, & \text{if } \phi(1) \ne 0. \end{cases}$$
We use Algorithm 1 to compute the symmetric Nash equilibrium of the m-player Snowdrift Game. The numerical results are reported in Table 2, where No.Iter, CPU(s), Res, and SNE($\mathbf{x}^*$) have the same meanings as in Table 1.
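The payoff tensor of this example can be built in the same illustrative Python style (snowdrift_tensor is our own helper; index 0 = shovel, index 1 = do not shovel), again calling the symmetric_nash sketch from after Algorithm 1:

```python
import numpy as np
from itertools import product

def snowdrift_tensor(m, b, c):
    """First player's payoff tensor A^(1) of the m-player Snowdrift Game."""
    A1 = np.zeros((2,) * m)
    for rest in product((0, 1), repeat=m - 1):
        k_others = rest.count(0)                         # phi(1): how many other players shovel
        A1[(0,) + rest] = b - c / (1 + k_others)         # player 1 shovels
        A1[(1,) + rest] = b if k_others > 0 else 0.0     # player 1 free-rides (0 if nobody shovels)
    return A1

A1 = snowdrift_tensor(3, 8, 1)                           # parameters m = 3, b = 8, c = 1
x, _, _ = symmetric_nash(A1, y0=0.1 * np.ones(2))
print(np.round(x, 4))                                    # compare with Table 2
```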
Example 3. 
Consider the bimatrix symmetric game “Rock-Paper-Scissors” [9,52]. The tensor form of “Rock-Paper-Scissors” can be represented as G = ( [ 2 ] , [ 3 ] , A ( 1 ) ) , where
$$\mathcal{A}^{(1)} = \begin{pmatrix} 0 & -1 & 1 \\ 1 & 0 & -1 \\ -1 & 1 & 0 \end{pmatrix}.$$
We use Algorithm 1 to solve the symmetric Nash equilibrium of “Rock-Paper-Scissors”. A symmetric Nash equilibrium ( x * , x * ) with
$$\mathbf{x}^* = \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right)$$
is obtained with 23 iteration steps in 0.2921 s.
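For completeness, the same computation can be sketched with the Python functions introduced after Algorithm 1 (illustrative only; the 23 iterations above refer to the authors' MATLAB run):

```python
import numpy as np

A1 = np.array([[ 0., -1.,  1.],     # Rock against (Rock, Paper, Scissors)
               [ 1.,  0., -1.],     # Paper
               [-1.,  1.,  0.]])    # Scissors

x, _, _ = symmetric_nash(A1, y0=0.1 * np.ones(3))   # solver sketch from after Algorithm 1
print(np.round(x, 4))                               # approximately (1/3, 1/3, 1/3)
```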
In [53], Chatterjee pointed out that the Nash equilibrium of a given game is identical to the optimal solution of an optimization model with zero optimal value and developed the SQP algorithm to compute the Nash equilibrium of an n-person game. The main idea of the SQP algorithm is to solve a series of quadratic programming problems using the quasi-Newton method. Next, we compare our proposed method with SQP in [53].
Example 4. 
Consider an m-person symmetric game with n pure strategies, $G = ([m], [n], \mathcal{A}^{(1)})$, where the elements of the payoff tensor $\mathcal{A}^{(1)}$ are randomly generated using the command "tenrand" and then 1-mode symmetrized using the command "symmetrize". Different values of m and n generate different symmetric games. In our experiments, for each fixed m and n, ten random problems are generated. We use Algorithm 1 and the SQP algorithm to compute Nash equilibria of these symmetric games and compare their performance. The numerical results are listed in Table 3, where AI (MinI, MaxI) denotes the average (minimal, maximal) number of iterations for solving the ten randomly generated problems of each size, and AT (MinT, MaxT) denotes the average (minimal, maximal) CPU time in seconds. For Algorithm 1, ARes denotes the average value of $\|\mathbf{y}^{(k+1)} - \mathbf{y}^{(k)}\|$ when the algorithm stops; for the SQP method, ARes denotes the average absolute error in the objective function of the minimization problem in [53]. The parameters used in Algorithm 1 were chosen as
$$\tau := 0.75, \quad \sigma := 1 \times 10^{-4}, \quad \mathbf{y}^{(0)} := 0.01 \cdot \mathbf{1}_n.$$
From Table 3, we see that both Algorithm 1 and SQP can efficiently compute the Nash equilibria of randomly generated symmetric games. Algorithm 1 requires more CPU time than SQP when the number of pure strategies is small. However, as the number of pure strategies grows, our proposed Algorithm 1 consumes less CPU time than SQP. This is mainly because Algorithm 1 takes advantage of the symmetry of the game and only needs to store and compute with the first player's payoff tensor. The SQP method, on the other hand, needs to store and compute the payoff tensors of all players in the game, and as the number of pure strategies increases, the number of variables used by SQP to compute a Nash equilibrium grows exponentially compared with Algorithm 1.

5. Conclusions

In this paper, we show that the payoff tensor of the k-th player in an m-person symmetric game is k-mode symmetric, and that the payoff tensors of two different players are transposes of each other. We also reformulate the m-person symmetric game as a tensor complementarity problem and demonstrate that finding a symmetric Nash equilibrium of the m-person symmetric game is equivalent to finding a solution of the resulting tensor complementarity problem. Finally, we apply the hyperplane projection algorithm to solve the resulting tensor complementarity problem and provide some numerical results for computing symmetric Nash equilibria.

Author Contributions

Conceptualization, Q.L. (Qilong Liu) and Q.L. (Qingshui Liao); methodology, Q.L. (Qilong Liu) and Q.L. (Qingshui Liao); software, Q.L. (Qilong Liu); writing—original draft preparation, Q.L. (Qilong Liu); writing—review and editing, Q.L. (Qingshui Liao). All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Natural Science Foundation of the Educational Commission of Guizhou Province under Grant Qian-Jiao-He KY Zi [2021]298 and Guizhou Provincial Science and Technology Projects under Grant QKHJC-ZK[2023]YB245.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nash, J. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 1950, 36, 48–49. [Google Scholar] [CrossRef] [PubMed]
  2. Nash, J. Non-cooperative games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
  3. He, K.; Wu, H.; Wang, Z.; Li, H. Finding Nash equilibrium for imperfect information games via fictitious play based on local regret minimization. Int. J. Intell. Syst. 2022, 37, 6152–6167. [Google Scholar] [CrossRef]
  4. Berthelsen, M.L.T.; Hansen, K.A. On the computational complexity of decision problems about multi-player Nash equilibria. Theor. Comput. Syst. 2022, 66, 519–545. [Google Scholar] [CrossRef]
  5. Palm, G. Evolutionary stable strategies and game dynamics for n-person games. J. Math. Biol. 1984, 19, 329–334. [Google Scholar] [CrossRef]
  6. Kreps, D.M. Game Theory and Economic Modelling; Oxford University Press: Oxford, UK, 1990. [Google Scholar]
  7. Von Stengel, B. Computing equilibria for two-person games. Handb. Game Theory Econ. Appl. 2002, 3, 1723–1759. [Google Scholar]
  8. Govindan, S.; Wilson, R. A global Newton method to compute Nash equilibria. J. Econ. Theory 2003, 110, 65–86. [Google Scholar] [CrossRef]
  9. Cheng, S.F.; Reeves, D.M.; Vorobeychik, Y.; Wellman, M.P. Notes on Equilibria in Symmetric Games. In Proceedings of the 6th International Workshop on Game Theoretic and Decision Theoretic Agents GTDT 2004, New York, NY, USA, 20 July 2004; pp. 71–78. [Google Scholar]
  10. Roughgarden, T. Computing equilibria in multi-player games. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Vancouver, BC, Canada, 23–25 January 2005; Volume 118, p. 82. [Google Scholar]
  11. Amir, R.; Jakubczyk, M.; Knauff, M. Symmetric versus asymmetric equilibria in symmetric supermodular games. Int. J. Game Theory 2008, 37, 307–320. [Google Scholar] [CrossRef]
  12. Hefti, A. Equilibria in symmetric games: Theory and applications. Theor. Econ. 2017, 12, 979–1002. [Google Scholar] [CrossRef]
  13. Reny, P.J. Nash equilibrium in discontinuous games. Annu. Rev. Econ. 2020, 12, 439–470. [Google Scholar] [CrossRef]
  14. Bilò, V.; Mavronicolas, M. The complexity of computational problems about Nash equilibria in symmetric win-lose games. Algorithmica 2021, 83, 447–530. [Google Scholar] [CrossRef]
  15. Armony, M.; Roels, G.; Song, H. Capacity choice game in a multiserver queue: Existence of a Nash equilibrium. Nav. Res. Log. 2021, 68, 663–678. [Google Scholar] [CrossRef]
  16. Passacantando, M.; Raciti, F. A note on generalized Nash games played on networks. In Nonlinear Analysis, Differential Equations, and Applications; Rassias, T.M., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 365–380. [Google Scholar] [CrossRef]
  17. Bomze, I.M. Non-cooperative two-person games in biology: A classification. Int. J. Game Theory 1986, 15, 31–57. [Google Scholar] [CrossRef]
  18. Myerson, R.B. Nash equilibrium and the history of economic theory. J. Econ. Lit. 1999, 37, 1067–1082. [Google Scholar] [CrossRef]
  19. Smith, J.M. Evolutionary game theory. Physica D 1986, 22, 43–49. [Google Scholar] [CrossRef]
  20. Broom, M.; Rychtár, J. Game-Theoretical Models in Biology; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  21. Rapoport, A.; Chammah, A.M.; Orwant, C.J. Prisoner’s Dilemma: A Study in Conflict and Cooperation; University of Michigan Press: Ann Arbor, MI, USA, 1965; Volume 165. [Google Scholar]
  22. Axelrod, R.; Hamilton, W.D. The evolution of cooperation. Science 1981, 211, 1390–1396. [Google Scholar] [CrossRef] [PubMed]
  23. Sugden, R. The Economics of Rights, Co-Operation and Welfare; Palgrave Macmillan: London, UK, 2004. [Google Scholar]
  24. Skyrms, B. The Stag Hunt and the Evolution of Social Structure; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  25. Lemke, C.E.; Howson, J.T., Jr. Equilibrium points of bimatrix games. J. Soc. Indust. Appl. Math. 1964, 12, 413–423. [Google Scholar] [CrossRef]
  26. Gowda, M.S.; Sznajder, R. A generalization of the Nash equilibrium theorem on bimatrix games. Int. J. Game Theory 1996, 25, 1–12. [Google Scholar] [CrossRef]
  27. Wilson, R. Computing equilibria of n-person games. SIAM J. Appl. Math. 1971, 21, 80–87. [Google Scholar] [CrossRef]
  28. Souza, M.O.; Pacheco, J.M.; Santos, F.C. Evolution of cooperation under N-person snowdrift games. J. Theor. Biol. 2009, 260, 581–588. [Google Scholar] [CrossRef]
  29. Santos, M.D.; Pinheiro, F.L.; Santos, F.C.; Pacheco, J.M. Dynamics of N-person snowdrift games in structured populations. J. Theor. Biol. 2012, 315, 81–86. [Google Scholar] [CrossRef] [PubMed]
  30. Rosenschein, J.S.; Zlotkin, G. Rules of Encounter: Designing Conventions for Automated Negotiation among Computers; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
  31. Huang, Z.; Qi, L. Formulating an n-person noncooperative game as a tensor complementarity problem. Comput. Optim. Appl. 2017, 66, 557–576. [Google Scholar] [CrossRef]
  32. Abdou, J.; Safatly, E.; Nakhle, B.; Khoury, A.E. High-dimensional nash equilibria problems and tensors applications. Int. Game Theory Rev. 2017, 19, 1750015. [Google Scholar] [CrossRef]
  33. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  34. Kofidis, E.; Regalia, P.A. On the best rank-1 approximation of higher-order supersymmetric tensors. SIAM J. Matrix Anal. Appl. 2002, 23, 863–884. [Google Scholar] [CrossRef]
  35. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symb. Comput. 2005, 40, 1302–1324. [Google Scholar] [CrossRef]
  36. Ragnarsson, S.; Van Loan, C.F. Block tensors and symmetric embeddings. Linear Algebra Appl. 2013, 438, 853–874. [Google Scholar] [CrossRef]
  37. Facchinei, F.; Pang, J. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007; Volume II. [Google Scholar]
  38. Murnighan, J.K.; Kim, J.W.; Metzger, A.R. The volunteer dilemma. Admin. Sci. Quart. 1993, 38, 515–538. [Google Scholar] [CrossRef]
  39. Archetti, M. The volunteer’s dilemma and the optimal size of a social group. J. Theor. Biol. 2009, 261, 475–480. [Google Scholar] [CrossRef]
  40. Song, Y.; Qi, L. Properties of some classes of structured tensors. J. Optimiz. Theory App. 2015, 165, 854–873. [Google Scholar] [CrossRef]
  41. Bai, X.; Huang, Z.; Wang, Y. Global uniqueness and solvability for tensor complementarity problems. J. Optimiz. Theory App. 2016, 170, 72–84. [Google Scholar] [CrossRef]
  42. Che, M.; Qi, L.; Wei, Y. Positive-definite tensors to nonlinear complementarity problems. J. Optimiz. Theory App. 2016, 168, 475–487. [Google Scholar] [CrossRef]
  43. Ding, W.; Luo, Z.; Qi, L. P-tensors, P0-tensors, and their applications. Linear Algebra Appl. 2018, 555, 336–354. [Google Scholar] [CrossRef]
  44. Luo, Z.; Qi, L.; Xiu, N. The sparsest solutions to Z-tensor complementarity problems. Optim. Lett. 2017, 11, 471–482. [Google Scholar] [CrossRef]
  45. Song, Y.; Qi, L. Tensor complementarity problem and semi-positive tensors. J. Optimiz. Theory App. 2016, 169, 1069–1078. [Google Scholar] [CrossRef]
  46. Song, Y.; Qi, L. Properties of tensor complementarity problem and some classes of structured tensors. arXiv 2014, arXiv:1412.0113. [Google Scholar]
  47. Xie, S.; Li, D.; Hongru, X. An iterative method for finding the least solution to the tensor complementarity problem. J. Optimiz. Theory App. 2017, 175, 119–136. [Google Scholar] [CrossRef]
  48. Ling, L.; He, H.; Ling, C. On error bounds of polynomial complementarity problems with structured tensors. Optimization 2018, 67, 341–358. [Google Scholar] [CrossRef]
  49. Song, Y.; Yu, G. Properties of solution set of tensor complementarity problem. J. Optimiz. Theory App. 2016, 170, 85–96. [Google Scholar] [CrossRef]
  50. Wang, Y.; Huang, Z.; Bai, X. Exceptionally regular tensors and tensor complementarity problems. Optimi. Method. Softw. 2016, 31, 815–828. [Google Scholar] [CrossRef]
  51. Cottle, R.; Pang, J.; Stone, R. The Linear Complementarity Problem; Society for Industrial and Applied Mathematic: Philadelphia, PA, USA, 1992. [Google Scholar]
  52. Sinervo, B.; Lively, C.M. The rock–paper–scissors game and the evolution of alternative male strategies. Nature 1996, 380, 240. [Google Scholar] [CrossRef]
  53. Chatterjee, B. An optimization formulation to compute Nash equilibrium in finite games. In Proceedings of the 2009 Proceeding of International Conference on Methods and Models in Computer Science (ICM2CS), Delhi, India, 14–15 December 2009; pp. 1–5. [Google Scholar] [CrossRef]
Table 1. The numerical results of the problem in Example 1.
m | b | a | c | No.Iter | CPU(s) | Res | SNE(x*)
320041100.05824.18 × 10 7 (0.5000,0.5000)
42410.22779.99 × 10 7 (0.2929,0.7071)
162470.26447.62 × 10 7 (0.6465,0.3235)
321500.39127.42 × 10 7 (0.8232,0.1768)
643210.19199.82 × 10 7 (0.7835,0.2165)
1923720.61769.19 × 10 7 (0.8750,0.1250)
6200411210.87519.18 × 10 7 (0.2422,0.7578)
421010.75239.17 × 10 7 (0.1295,0.8705)
162630.51118.43 × 10 7 (0.3403,0.6597)
32190.15033.53 × 10 7 (0.5000,0.5000)
643110.08183.54 × 10 7 (0.4578,0.5422)
1923140.11524.32 × 10 7 (0.5647,0.4353)
9600640.25140.18915.95 × 10 7 (0.5000,0.5000)
640.5590.53828.54 × 10 7 (0.4548,0.5452)
641300.28265.90 × 10 7 (0.4054,0.5946)
5120.25880.73818.84 × 10 7 (0.6145,0.3855)
5120.5770.73546.79 × 10 7 (0.5796,0.4204)
5121540.63157.15 × 10 7 (0.5415,0.4585)
12600640.252062.18909.43 × 10 7 (0.3960,0.6040)
640.51431.83739.61 × 10 7 (0.3567,0.6433)
6411141.27709.67 × 10 7 (0.3149,0.6851)
5120.25110.54865.97 × 10 7 (0.5000,0.5000)
5120.5530.77639.67 × 10 7 (0.4675,0.5325)
5121270.59748.64 × 10 7 (0.4329,0.5671)
15250020480.12583.14553.90 × 10 7 (0.5000,0.5000)
204864633.61724.56 × 10 7 (0.2193,0.7807)
2048128653.85189.35 × 10 7 (0.1797,0.8203)
10240.1251774.66599.93 × 10 7 (0.4747,0.5253)
102464473.38358.62 × 10 7 (0.1797,0.8203)
1024128854.41027.11 × 10 7 (0.1380,0.8620)
Table 2. The numerical results of the problem in Example 2.
m | b | c | No.Iter | CPU(s) | Res | SNE(x*)
381710.33208.84 × 10 7 (0.7686,0.2314)
82340.18228.49 × 10 7 (0.6496,0.3504)
105260.16068.69 × 10 7 (0.4418,0.5582)
1081610.82318.92 × 10 7 (0.1886,0.8114)
201200.17299.22 × 10 7 (0.8611,0.1389)
2010270.17197.46 × 10 7 (0.4417,0.5583)
6811360.86968.79 × 10 7 (0.4653,0.5347)
821340.76359.08 × 10 7 (0.3595,0.6405)
1051320.83059.62 × 10 7 (0.2162,0.7838)
1081570.98678.08 × 10 7 (0.0817,0.9183)
201920.57058.62 × 10 7 (0.5711,0.4289)
2010920.69097.21 × 10 7 (0.2162,0.7838)
981950.73842.12 × 10 7 (0.3291,0.6709)
82680.51518.84 × 10 7 (0.2466,0.7534)
105590.44473.20 × 10 7 (0.1436,0.8574)
108840.77848.17 × 10 7 (0.0520,0.9480)
2011641.23359.53 × 10 7 (0.4179,0.5821)
2010560.54544.96 × 10 7 (0.1426,0.8574)
12813703.43769.99 × 10 7 (0.2542,0.7458)
822832.65799.51 × 10 7 (0.1876,0.8124)
1051911.86787.55 × 10 7 (0.1065,0.8935)
1081981.95167.56 × 10 7 (0.0383,0.9617)
2013193.09989.39 × 10 7 (0.3283,0.6717)
20101611.57629.82 × 10 7 (0.1066,0.8934)
15813576.12149.91 × 10 7 (0.2068,0.7932)
822495.36429.38 × 10 7 (0.1512,0.8488)
1051994.59109.94 × 10 7 (0.0850,0.9150)
1082144.71558.20 × 10 7 (0.0304,0.9696)
2013386.12589.69 × 10 7 (0.2699,0.7301)
20101804.72179.65 × 10 7 (0.0850,0.9150)
Table 3. Comparison results of Algorithm 1 and SQP in Example 4.
m | n | Algorithm 1: AI/MinI/MaxI | AT/MinT/MaxT | ARes | SQP: AI/MinI/MaxI | AT/MinT/MaxT | ARes
321297.7/71/20001.9203/0.1314/2.93142.29 × 10 6 14/9/230.0119/0.0086/0.01640.0072
31374.9/118/20002.4910/0.2232/3.91453.19 × 10 6 21.8/12/350.0221/0.0141/0.03790.0049
4380.6/69/20000.8019/0.1503/3.93558.08 × 10 7 41.1/16/1680.0704/0.0229/0.21330.0012
5667.4/153/20001.4309/0.3711/3.96272.10 × 10 6 81.7/37/1560.1838/0.0829/0.34700.0036
6399/172/6510.9705/0.4316/1.54888.95 × 10 7 44.8/25/1320.1944/0.1081/0.58310.0173
7485/87/20001.1525/0.2303/4.34842.48 × 10 6 59.3/26/1190.4642/0.2081/0.92240.0110
8332.2/55/12020.8340/0.1746/2.94187.41 × 10 7 75.5/25/1071.0498/0.3544/1.49110.0140
9128.6/81/2360.3637/0.2370/0.63507.69 × 10 7 80/48/960.9293/1.172/2.33050.0251
10429.2/258/8141.2628/0.7618/2.38947.86 × 10 7 78.7/39/883.093/1.5428/3.46040.0253
42691.3/86/20001.4046/0.1817/4.02378.86 × 10 7 12.2/7/380.0171/0.0090/0.05550.0102
3343.4/138/8470.7876/0.3111/1.91976.26 × 10 7 16.2/10/300.0336/0.0222/0.05810.0109
4762.8/168/20001.9433/0.4276/5.14342.69 × 10 6 31/16/700.2039/0.1077/0.45940.0056
5355/19/10620.9314/0.0577/2.71268.53 × 10 7 70/24/1191.4742/0.5099/2.54180.0038
6410.2/142/18301.1065/0.4015/4.69137.97 × 10 7 82.5/34/1034.7483/1.9861/5.90670.0201
7660.8/29/20001.9983/0.0916/6.06032.55 × 10 5 75.9/34/9010.5529/4.7972/12.51890.0166
812.8/10/150.0434/0.0358/0.05156.09 × 10 7 62.3/34/8118.8267/10.3801/24.41310.0141
99.3/7/130.0337/0.026/0.04585.11 × 10 7 65.1/39/7339.0789/23.6408/43.74140.0165
10327.5/13/69810.655/0.05/2.23762.81 × 10 7 62.6/32/6671.0717/36.8483/74.97980.0215
521296.9/273/20002.7576/0.6484/4.18781.06 × 10 6 15.4/7/300.023/0.0136/0.03962.34 × 10 5
3527.1/14/15441.3769/0.0407/4.0537.93 × 10 7 31.9/11/1370.2725/0.0978/1.17680.0035
4737.7/220/20002.1013/0.6354/5.65628.94 × 10 7 38/17/1151.981/0.9142/5.88070.0039
5820.6/83/20002.5495/0.2650/6.26492.78 × 10 5 48.1/20/9611.0989/4.7046/21.93280.0028
615.8/11/190.0555/0.0392/0.06587.60 × 10 7 60.5/29/8347.4/23.0075/64.60110.0067
711/9/160.0437/0.0362/0.06834.62 × 10 7 66.8/41/73150.6369/93.0992/164.49440.0090
81326.5/557/20005.0864/2.0911/7.69933.13 × 10 5 63.9/54/65391.2597/329.9332/402.32620.0054
91715.5/975/20006.5727/3.6809/7.84035.84 × 10 5 55.2/44/58712.6175/570.5763/762.75780.0037
101015.1/52/18764.3262/0.2259/8.29326.67 × 10 7 53/53/531400.4/1383.3/1435.90.0063