Article

Applications of Variational Analysis to Strong Nash Equilibria †

Glenn Harris 1 and Sien Deng 2,*
1 Independent Researcher, Huntsville, TX 77320, USA
2 Department of Mathematical Sciences, Northern Illinois University, 1425 W. Lincoln Hwy., DeKalb, IL 60115-2828, USA
* Author to whom correspondence should be addressed.
† This article is extended from the first author’s PhD dissertation.
Axioms 2025, 14(8), 634; https://doi.org/10.3390/axioms14080634
Submission received: 11 July 2025 / Revised: 8 August 2025 / Accepted: 12 August 2025 / Published: 14 August 2025
(This article belongs to the Special Issue Mathematical Methods in the Applied Sciences, 2nd Edition)

Abstract

Game theory problems have a wide array of applications and intricate structures. Because of this, there are different types of solution concepts available in the literature. In this work, strong Nash equilibria (a type of solution to game theoretic problems that extends Nash equilibria) are explored. Some new sufficient conditions for the existence of said solutions are put forth. An algorithm is also provided that, when convergent, will lead to a strong Nash equilibrium. Some tests to determine the practical efficiency of the algorithm are included.
MSC:
91A35; 91A80; 49J53; 91-08

1. Introduction

Game theory, pioneered in [1], has many suitable applications with many different frameworks and goals. For example, [2] developed a type of strategy choice for players of non-cooperative games that came to be known as a Nash equilibrium, whereby no player can unilaterally deviate from the strategy to achieve a strictly better result for themselves. Nash equilibria are known to exist under certain conditions: one early example was given in [3], where compact Hausdorff pure strategy spaces and continuous cost functions for a game imply the existence of a Nash equilibrium via an extension of the Kakutani fixed-point theorem. A natural extension for cooperative games was made in [4], where the strategy profile is one for which no coalition of players can collectively change their strategies to achieve strictly better results for all members of the coalition. This is known as a strong Nash equilibrium (SNE), and while there are no general results on the existence of an SNE, some work characterizing them for specific environments has been carried out. Recently, in [5], some specific game properties were shown to enable the existence of an SNE. Additionally, algorithms have been created for finding an SNE, as can be found in sources like [6,7]. The algorithms in [6,7] are considerably different from the one found in this paper. A few places where Nash equilibria are explored in relation to real-world problems by utilizing variational analysis are [8,9]. Additionally, another work characterizing Nash equilibria using the variational inequality was carried out in [10].
The structure of this work proceeds as follows: First, a section on some necessary background information is given. Strong Nash equilibria are shown to be connected to efficiency in a particular case. Then, using the variational inequality, as defined in [10], it is shown that a strategy profile is an SNE when, for every coalition, a form of the variational inequality holds. A simple characterization of an SNE is given for two-player games. Finally, an algorithm is given that produces a sequence of strategies that, when convergent, converge to a strong Nash equilibrium. A simple example of the algorithm in use is then given. Tests of the algorithm are also performed and a summary is provided. A portion of the results were part of the PhD thesis of the first author while studying at Northern Illinois University [11].

2. Preliminaries and Notations

This work uses only real Euclidean vector spaces. The notations and definitions that follow in this section will be used throughout the paper:
Notation 1. 
Given the general vectors $x = (x_1, \dots, x_n) \in \mathbb{R}^n$ and $y = (y_1, \dots, y_n) \in \mathbb{R}^n$, $x = 0$ refers to the vector $x = (0, 0, \dots, 0)$; $x = y$ implies $x_i = y_i$ for all $i$; $x < y$ implies $x_i < y_i$ for all $i$; and $x \le y$ implies $x_i \le y_i$ for all $i$.
Notation 2. 
For a function $f(x)$ with input vector $x = (x_1, \dots, x_n)$ of length $n$ and some set $S \subseteq \{1, \dots, n\}$, the gradient $\nabla_{x_i} f(x)$ is understood to be the gradient of $f$ restricted to the $x_i$ component of $x$ (more than one component may be listed), and $\nabla_S f(x)$ is the gradient restricted to the components $x_j$ where $j \in S$. This rule is used when the gradient of $f$ is to be investigated but certain entries of $x$ are held fixed.
The structure of a problem and the solutions to that problem are as described by the following definition:
Definition 1. 
Let $F(x) = (f_1(x), \dots, f_m(x))$ be a vector-valued function and $X \subseteq \mathbb{R}^n$. A problem is described by
$$(P) \quad \begin{array}{ll} \text{Minimize:} & F(x) \\ \text{Subject to:} & x \in X. \end{array}$$
This is a multi-objective optimization problem (MOOP). A feasible solution for (P) is any point $x \in X$. An efficient solution (sometimes called Pareto efficient) for (P) is a feasible solution $x$ with the property that there is no other feasible solution $y \in X$ such that $F(y) \le F(x)$ and $F(y) \ne F(x)$. Identifying all the efficient points of a problem (P) is what is meant by “solving (P)”. Lastly, a feasible solution $x$ is said to be a weakly efficient solution to (P) if there is no other $y \in X$ such that $F(y) < F(x)$.
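To make the definition concrete, here is a small Python sketch (ours, not from the paper; all names are illustrative) that refutes efficiency of a feasible point by searching a finite sample of $X$. On a continuum, such a search can only disprove efficiency, never certify it.

```python
import numpy as np

def dominates(Fy, Fx):
    # F(y) <= F(x) componentwise with F(y) != F(x), per Definition 1
    Fy, Fx = np.asarray(Fy), np.asarray(Fx)
    return bool(np.all(Fy <= Fx) and np.any(Fy < Fx))

def refute_efficiency(F, x, sample):
    # returns a dominating y from the sample, or None if none is found
    Fx = F(x)
    for y in sample:
        if dominates(F(y), Fx):
            return y
    return None

# toy MOOP: F(x) = (x1, x2) on the unit square; (0, 0) is efficient
F = lambda x: np.array(x, dtype=float)
grid = [(a, b) for a in np.linspace(0, 1, 11) for b in np.linspace(0, 1, 11)]
print(refute_efficiency(F, (0.0, 0.0), grid))  # None: nothing dominates it
print(refute_efficiency(F, (1.0, 1.0), grid))  # e.g. (0.0, 0.0)
```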
Efficiency first made its début in [12] and then was modernized in [13]. Further formalization that resulted in the form described here was provided in [14]. A good reference on variational analysis is [15].
With regard to game theory, a complete explanation of how a player will react or behave in a game given all the potential possible situations that the player may encounter is called a pure strategy. A mixed strategy is a vector from the simplex space of a dimension equal to the number of pure strategies available to a player, and that vector determines the probability that any of the potential pure strategies will be employed in a game for that player [16]. Mixed strategies are continuous in nature, and so they are the type of strategies under consideration in this work.
Definition 2. 
A game (G) is described by the triplet (G) $= (I, X, F)$, with $I = \{1, \dots, n\}$ representing the $n$ different players, while the space $X = \prod_{i=1}^n X_i$ is referred to as the strategy space, where each $X_i$ is the closed convex simplex $\Delta^{p_i - 1} \subseteq \mathbb{R}^{p_i}$, which describes the $i$th player’s collection of mixed strategies to choose from based on the $p_i$ potential pure strategies available to that player. Here, $x_i \in X_i$ is called the $i$th player’s strategy and $x = (x_1, \dots, x_n) \in X$ is known as a strategy profile for the game. The function $F = (f_1, \dots, f_n)$ is a multi-objective function where every $i$th player is assigned $f_i : X \to \mathbb{R}$ as a cost function that they wish to minimize. These types of games are multi-objective in the sense that all the players have their own individual objective, not in the sense that each player has multiple objectives that they are trying to minimize for themselves simultaneously.
For the index set $S \subseteq \{1, \dots, n\}$, which will be thought of as a coalition of players, the function $f_i(y_S, x_{-S})$ is the same as $f_i$ with input $x$ but with the specific values $x_j$ changed to $y_j$ for $j \in S$, where $y \in X_S = \prod_{k \in S} X_k$. A coalition can comprise any combination of players (including the coalition of all players), and all the members of the coalition will be working together to reduce the cost for all coalition members. For this reason, within a coalition, the multi-objective nature of the problem becomes a group of players working together to minimize all of the coalition’s objectives simultaneously. For clarity, in replacement cases where the vectors are denoted with a subscript, the subscript may be migrated to a superscript, such as $f_i((y_k)_S, (x_k)_{-S}) = f_i(y^k_S, x^k_{-S})$. Also, when only the $i$th term is to be replaced, we write $f_i(x_i^*, x_{-i}) := f_i(x_1, \dots, x_{i-1}, x_i^*, x_{i+1}, \dots, x_n)$.
A Nash equilibrium (NE) is a point $x = (x_1, \dots, x_n)$ where, for all $i$, there is no $x_i^* \in X_i$ for which $f_i(x_i^*, x_{-i}) < f_i(x)$. That is to say, a unilateral strategy change by a single player will not result in a better outcome for that player. A strong Nash equilibrium (SNE) is a point $x = (x_1, \dots, x_n)$ where, given any coalition $S \subseteq \{1, \dots, n\}$, there is no other strategy profile $x_S^* \in X_S$ for which $f_i(x_S^*, x_{-S}) < f_i(x)$ for all $i \in S$. So, an SNE is a strategy profile where no coalition, including the coalition of all players, can make a group change to their strategies that would result in a strictly better outcome for everyone in the coalition.
It is a well-known fact that every SNE is automatically an NE and weakly efficient (consider the coalition of all players).
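As a concrete companion to these definitions, the following Python sketch (our illustration; all names hypothetical) refutes the SNE property on a discretized game by enumerating coalitions and joint deviations, exactly mirroring the definition above. A grid search can only refute, not certify, the property over continuous strategy sets.

```python
import itertools

def refute_sne_on_grid(F, x, grids):
    """F maps a strategy profile to the tuple of player costs; grids[i] is a
    finite set of candidate strategies for player i. Returns a certificate
    (S, y) if coalition S strictly improves every member by moving to y."""
    n = len(grids)
    base = F(x)
    for size in range(1, n + 1):
        for S in itertools.combinations(range(n), size):
            for joint in itertools.product(*(grids[i] for i in S)):
                y = list(x)
                for i, yi in zip(S, joint):
                    y[i] = yi
                new = F(y)
                if all(new[i] < base[i] for i in S):
                    return S, y
    return None  # no coalition found a strictly improving joint deviation
```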

3. New Results for Strong Nash Equilibria

To begin with, a connection can be made between strong Nash equilibria and the concept of efficiency in a specific case where every player shares a function component underlying their specific composite cost function.
Proposition 1. 
If $x = (x_1, \dots, x_n)$ is an SNE for the game (G) with $f_i(x) = g_i(h(x))$ for all $i$, where the functions $g_i$ are linear and non-constant (i.e., $g_i(t) = m_i t + b_i$ with $m_i \ne 0$) and $h : X \to \mathbb{R}$, then it is also efficient for the MOOP of minimizing $f = (f_1, \dots, f_n)$.
Proof. 
If $x$ is not efficient, then there is a $y = (y_1, \dots, y_n)$ with at least one $i$ where $f_i(y) < f_i(x)$, along with $f_j(y) \le f_j(x)$ for all other $j \ne i$. Since $f_i(y) < f_i(x)$, then $h(x) \ne h(y)$ and $x \ne y$. Since all $g_i$ are linear and non-constant, for any given $j$, $f_j(y) \ne f_j(x)$. So, for all $j$, $f_j(y) < f_j(x)$. Thus, $x$ is not weakly efficient and, thus, it cannot be an SNE. So, being an SNE for the game (G) implies efficiency for the given MOOP by way of contraposition. □
Some definitions for characterizing strategy profiles that are Nash equilibria are the concepts of a normal cone and the variational inequality, as described in [10]. A vector $v$ is a normal vector to $X$ at $\bar{x}$ if and only if $v \cdot (x - \bar{x}) \le 0$ for all $x \in X$. This is written as $v \in N_X(\bar{x})$, so $N_X$ is a set-valued mapping described by $\operatorname{gph} N_X = \{(\bar{x}, v) : \bar{x} \in X \text{ and } v \in N_X(\bar{x})\}$. In general, for a map $f$, the variational inequality for $f$ and $X$ is
$$-f(\bar{x}) \in N_X(\bar{x}), \quad \text{that is to say,} \quad \bar{x} \in X \ \text{and} \ f(\bar{x}) \cdot (x - \bar{x}) \ge 0 \ \text{for all } x \in X.$$
In [10], it was determined that a strategy profile $\bar{x}$ is an NE for a game with convex cost functions if and only if $-\nabla_{x_i} f_i(\bar{x}_i, \bar{x}_{-i}) \in N_{X_i}(\bar{x}_i)$ for all $i$. This concept can be extended to an SNE.
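For the box-shaped strategy spaces used in the examples below, normal-cone membership reduces to componentwise sign checks. The following Python sketch is our reduction (names illustrative); it tests $v \in N_X(\bar{x})$ for $X = \prod_i [l_i, u_i]$ under the definition above.

```python
def in_normal_cone_box(v, xbar, lo, hi, tol=1e-12):
    """v . (x - xbar) <= 0 for every x in the box iff, componentwise:
    v_i <= 0 where xbar_i sits at the lower bound, v_i >= 0 at the upper
    bound, and v_i = 0 where xbar_i is interior."""
    for vi, xi, li, ui in zip(v, xbar, lo, hi):
        at_lo, at_hi = abs(xi - li) < tol, abs(xi - ui) < tol
        if at_lo and at_hi:
            continue                      # degenerate [l, l]: any vi is normal
        if at_lo and vi > tol:
            return False                  # at the lower bound, need vi <= 0
        if at_hi and vi < -tol:
            return False                  # at the upper bound, need vi >= 0
        if not (at_lo or at_hi) and abs(vi) > tol:
            return False                  # interior points only admit vi = 0
    return True

# the zero vector is normal at any point of any box
print(in_normal_cone_box([0.0, 0.0], [0.0, 0.5], [0.0, 0.0], [1.0, 1.0]))  # True
```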
Theorem 1. 
Let (G) be a game with strategy space $X$ and cost functions $F = (f_1, \dots, f_n)$. If $f_i$ is convex for all $i \in \{1, \dots, n\}$ and $\bar{x} = (\bar{x}_1, \dots, \bar{x}_n) \in X = \prod_{i \in \{1, \dots, n\}} X_i$ has the property that for all $S \subseteq \{1, \dots, n\}$ there is a $j \in S$ such that
$$-\nabla_S f_j(\bar{x}_S, \bar{x}_{-S}) \in N_{X_S}(\bar{x}_S),$$
then $\bar{x}$ is an SNE.
Proof. 
If we assume $\bar{x}$ is not an SNE, then there is a coalition $S^*$ and strategy profile $x^*_{S^*} \in X_{S^*}$ such that $f_i(x^*_{S^*}, \bar{x}_{-S^*}) < f_i(\bar{x}_{S^*}, \bar{x}_{-S^*})$ for all $i \in S^*$. But since $f_i$ is convex for all $i \in S^*$, then
$$\nabla_{S^*} f_i(\bar{x}_{S^*}, \bar{x}_{-S^*}) \cdot (x^*_{S^*} - \bar{x}_{S^*}) \le f_i(x^*_{S^*}, \bar{x}_{-S^*}) - f_i(\bar{x}_{S^*}, \bar{x}_{-S^*}) < 0.$$
This means that $-\nabla_{S^*} f_i(\bar{x}_{S^*}, \bar{x}_{-S^*}) \notin N_{X_{S^*}}(\bar{x}_{S^*})$ for every $i \in S^*$ by definition, which contradicts the assumed property of $\bar{x}$. Therefore, $\bar{x}$ must be an SNE. □
Example 1. 
Let
$$f_1(x) = x^T \begin{pmatrix} 2 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} x + \left( \tfrac{1}{2}, 0, \tfrac{1}{2} \right) \cdot x, \qquad f_2(x) = x^T \begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 1 \end{pmatrix} x, \qquad \text{and} \qquad f_3(x) = x^T \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 2 \end{pmatrix} x$$
be the cost functions for a game (G) with strategy space $X = \{(x_1, x_2, x_3) \in \mathbb{R}^3 : -1 \le x_1, x_2, x_3 \le 1\}$. Now, instead of an in-depth consideration of a player’s strategy, coalitions, and oppositions, we can simply observe all the coalitions’ relevant gradients and reach a conclusion about what points must be an SNE. Observe the restricted gradients:
   
$$\begin{aligned}
\nabla_{x_1} f_1(x) &= \left( 4x_1 + 2x_2 + \tfrac{1}{2},\ 0,\ 0 \right) \\
\nabla_{x_2} f_2(x) &= (0,\ 2x_1 + 4x_2 + 2x_3,\ 0) \\
\nabla_{x_3} f_3(x) &= (0,\ 0,\ 2x_2 + 4x_3) \\
\nabla_{x_1, x_2} f_1(x) &= \left( 4x_1 + 2x_2 + \tfrac{1}{2},\ 2x_1 + 4x_2,\ 0 \right) \\
\nabla_{x_1, x_3} f_1(x) &= \left( 4x_1 + 2x_2 + \tfrac{1}{2},\ 0,\ 2x_3 + \tfrac{1}{2} \right) \\
\nabla_{x_1, x_2} f_2(x) &= (2x_1 + 2x_2,\ 2x_1 + 4x_2 + 2x_3,\ 0) \\
\nabla_{x_2, x_3} f_2(x) &= (0,\ 2x_1 + 4x_2 + 2x_3,\ 2x_2 + 2x_3) \\
\nabla_{x_1, x_3} f_3(x) &= (2x_1,\ 0,\ 2x_2 + 4x_3) \\
\nabla_{x_2, x_3} f_3(x) &= (0,\ 2x_2 + 2x_3,\ 2x_2 + 4x_3) \\
\nabla_{x_1, x_2, x_3} f_1(x) &= \left( 4x_1 + 2x_2 + \tfrac{1}{2},\ 2x_1 + 4x_2,\ 2x_3 + \tfrac{1}{2} \right) \\
\nabla_{x_1, x_2, x_3} f_2(x) &= (2x_1 + 2x_2,\ 2x_1 + 4x_2 + 2x_3,\ 2x_2 + 2x_3) \\
\nabla_{x_1, x_2, x_3} f_3(x) &= (2x_1,\ 2x_2 + 2x_3,\ 2x_2 + 4x_3)
\end{aligned}$$
   
From this, we can work out what an SNE might be by going through all the coalitions $S$ and making sure that $-\nabla_S f_j(\bar{x}_S, \bar{x}_{-S}) \in N_{X_S}(\bar{x}_S)$ for at least some $j \in S$. After looking at the single-player coalitions $S = \{1\}$, $\{2\}$, or $\{3\}$, it can be seen that $\bar{x} = (-1, 1, -1)$ is a suitable vector for which $-\nabla_S f_j(\bar{x}_S, \bar{x}_{-S}) \in N_{X_S}(\bar{x}_S)$. Indeed, when $S = \{1\}$, for all $x = (x_1, x_2, x_3) \in X = [-1, 1]^3$,
$$\nabla_{x_1} f_1((-1, 1, -1)) \cdot (x - (-1, 1, -1)) = 1.5(x_1 + 1) \ge 0,$$
so $-\nabla_{x_1} f_1((-1, 1, -1)) \in N_{X_1}(-1)$.
When $S = \{2\}$, since $\nabla_{x_2} f_2((-1, 1, -1)) = 0$, it must be true that $-\nabla_{x_2} f_2((-1, 1, -1)) \in N_{X_2}(1)$. Lastly, similar to the $S = \{1\}$ case, for $S = \{3\}$,
$$\nabla_{x_3} f_3((-1, 1, -1)) \cdot (x - (-1, 1, -1)) = 2(x_3 + 1) \ge 0.$$
When more players are involved, we only need to find one such gradient per coalition in the normal cone. So, for $S = \{1, 2\}$, since
$$\nabla_{x_1, x_2} f_2((-1, 1, -1)) \cdot (x - (-1, 1, -1)) = (0, 0, 0) \cdot (x_1 + 1, x_2 - 1, x_3 + 1) \ge 0,$$
then $-\nabla_{x_1, x_2} f_2((-1, 1, -1)) \in N_{X_{\{1, 2\}}}((-1, 1))$.
For $S = \{1, 3\}$, notice that
$$\nabla_{x_1, x_3} f_3((-1, 1, -1)) \cdot (x - (-1, 1, -1)) = (2, 0, 2) \cdot (x_1 + 1, x_2 - 1, x_3 + 1) = 2(x_1 + 1) + 2(x_3 + 1)$$
and $2(x_1 + 1) + 2(x_3 + 1) \ge 0$ for all $x \in X = [-1, 1]^3$, so $-\nabla_{x_1, x_3} f_3((-1, 1, -1))$ is in the normal cone.
For $S = \{2, 3\}$, $\nabla_{x_2, x_3} f_2((-1, 1, -1)) = (0, 0, 0)$, so again it will be in the normal cone. Lastly, for $S = \{1, 2, 3\}$, notice that $\nabla_{x_1, x_2, x_3} f_2((-1, 1, -1)) = (0, 0, 0)$, so it will also be in the normal cone.
Since, at this point, every coalition has some player’s cost function’s negative gradient in the normal cone of the coalition, Theorem 1 says that the point is an SNE. So, $(-1, 1, -1)$ is an SNE.
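The coalition-by-coalition sweep above can be mechanized. The sketch below is ours (it reuses in_normal_cone_box from the earlier snippet and treats the restricted gradients as callables); it reports the coalitions, if any, for which no member’s negative gradient lands in the normal cone, i.e., where the sufficient condition of Theorem 1 has no witness.

```python
import itertools

def theorem1_sweep(grads, n, xbar, lo, hi):
    """grads(j, S, x) -> restricted gradient of f_j over coalition S at x,
    returned as a full-length vector with zeros off S. Returns the list of
    coalitions with no witness j; an empty list means the condition holds."""
    failing = []
    for size in range(1, n + 1):
        for S in itertools.combinations(range(n), size):
            neg = lambda j: [-g for g in grads(j, S, xbar)]
            if not any(in_normal_cone_box(neg(j), xbar, lo, hi) for j in S):
                failing.append(S)
    return failing
```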
The converse of Theorem 1 is not necessarily true. If $\bar{x}$ is an SNE, then $-\nabla_S f_j(\bar{x}_S, \bar{x}_{-S})$ does not need to be in $N_{X_S}(\bar{x}_S)$ for every $S \subseteq \{1, \dots, n\}$. This can be checked in as restrictive a case as a two-player game with cost functions $f_1(x) = (1, -1) \cdot x$ and $f_2(x) = (-1, 1) \cdot x$ and a strategy space that is the box $X = \{(x_1, x_2) \in \mathbb{R}^2 : 0 \le x_1, x_2 \le 1\}$.
Another proposition is provided that generalizes Theorem 1 in the context of two-player games. In fact, it gives a characterization of an SNE, and an example is also provided for describing such an SNE.
Proposition 2. 
Let (G) be a game with the convex strategy space $X = X_1 \times X_2$ and cost functions $F = (f_1, f_2)$, where each $f_i$ is convex and differentiable. Then, the following statement holds:
A pair $(\bar{x}_1, \bar{x}_2) \in X_1 \times X_2$ is an SNE if and only if there exists $\lambda = (\lambda_1, \lambda_2) \in \mathbb{R}^2_+ \setminus \{(0, 0)\}$ such that
$$-\nabla f_\lambda(\bar{x}_1, \bar{x}_2) \in N_{X_1}(\bar{x}_1) \times N_{X_2}(\bar{x}_2), \tag{1}$$
where $f_\lambda = \lambda_1 f_1 + \lambda_2 f_2$, and
$$-\nabla_{x_i} f_i(\bar{x}_1, \bar{x}_2) \in N_{X_i}(\bar{x}_i) \quad \text{for } i = 1, 2. \tag{2}$$
Proof. 
As each $f_i$ is convex, $(\bar{x}_1, \bar{x}_2)$ is an SNE if and only if it is an NE and a weakly efficient solution for $\min_{x \in X} F$. But $(\bar{x}_1, \bar{x}_2)$ being an NE and a weakly efficient solution is equivalent to having (2) and (1) hold, respectively. □
Example 2. 
Let $f_1(x_1, x_2) = x_1^2 + x_2^2 + x_1 - x_2$, $f_2(x_1, x_2) = x_1^2 + x_2^2 + x_2 - x_1$, and $X_1 = X_2 = [0, 1]$. For $(\bar{x}_1, \bar{x}_2) = (0, 0)$, it is easy to see that (2) holds, and $-\nabla f_\lambda(0, 0) \in N_{[0,1]}(0) \times N_{[0,1]}(0)$ with $\lambda = (1, 1)$, i.e., (1) holds. Based on Proposition 2, $(0, 0)$ is an SNE. But $(0, 0)$ does not satisfy the condition in Theorem 1.
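The two conditions of Proposition 2 can be verified mechanically for this example. The sketch below is ours (it leans on in_normal_cone_box from the normal-cone snippet); it checks (2) at $(0, 0)$ and (1) with $\lambda = (1, 1)$.

```python
import numpy as np

lo, hi = [0.0, 0.0], [1.0, 1.0]
xbar = np.array([0.0, 0.0])

grad_f1 = lambda x: np.array([2*x[0] + 1.0, 2*x[1] - 1.0])  # f1 = x1^2 + x2^2 + x1 - x2
grad_f2 = lambda x: np.array([2*x[0] - 1.0, 2*x[1] + 1.0])  # f2 = x1^2 + x2^2 + x2 - x1

# condition (2): each player's own negative partial gradient, with the other
# component zeroed out, lies in the normal cone of that player's interval
cond2 = (in_normal_cone_box([-grad_f1(xbar)[0], 0.0], xbar, lo, hi) and
         in_normal_cone_box([0.0, -grad_f2(xbar)[1]], xbar, lo, hi))

# condition (1): lambda = (1, 1) makes the linear terms cancel
grad_flam = grad_f1(xbar) + grad_f2(xbar)
cond1 = in_normal_cone_box(-grad_flam, xbar, lo, hi)

print(cond1 and cond2)  # True: (0, 0) is an SNE by Proposition 2
```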

4. Algorithm for Finding SNE

The algorithm in this work is of the same nature as other projected gradient methods like the ones found in [17,18] for multi-objective optimization. This algorithm differs by using the arg min function over all coalitions and those coalitions’ potential choices of strategies. So, the direction of descent that the algorithm seeks at any step is one heading towards the weakly Pareto efficient solution of a problem restricted to cost functions from a coalition (see Algorithm 1).
Algorithm 1 Locating a strong Nash equilibrium for convex games
Require: Strategy space $X = \prod_{i \in \{1, \dots, n\}} X_i$ with all $X_i$ convex, closed, and bounded; cost function $F$ with convex components; initial guess $x^0 \in X$; $k = 0$; $\beta \in (0, 1)$
1: while $x^k$ is not an SNE do
2:  $$(S_k, d_k) := \operatorname*{arg\,min}_{S \subseteq \{1, \dots, n\},\ d \in X_S - x^k_S} \left\{ \frac{\|d\|^2}{2} + \beta \max_{j \in S} \{\nabla_S f_j(x^k_S, x^k_{-S}) \cdot d\} \right\} \tag{3}$$
   (If there exists more than one arg min, choose the $(S_k, d_k)$ that corresponds to the lexicographical minimum of the first component, the index set $S_k$.)
3:  $$j_k := \operatorname*{arg\,max}_{j \in S_k} \left\{ \frac{\|d_k\|^2}{2} + \beta \nabla_{S_k} f_j(x^k_{S_k}, x^k_{-S_k}) \cdot d_k \right\} \tag{4}$$
   (Choose $j_k$ to be the smallest index if the arg max contains more than one element.)
4:  $\alpha_k := \operatorname*{arg\,min}_{\alpha \in \mathbb{R}_+} \{f_{j_k}(x^k + \alpha d_k) \text{ such that } x^k + \alpha d_k \in X \text{ and } f_i(x^k + \alpha d_k) - f_i(x^k) \le 0 \text{ for all } i \in S_k\}$
5:  $x^{k+1} := x^k + \alpha_k d_k$
6:  $k = k + 1$
7: end while
8: if $x^k$ is an SNE then
9:  Set $\bar{x} = x^k$
10: else [a sequence $(x^k)_{k \in \mathbb{N}}$ was created]
11:  if $(x^k)_{k \in \mathbb{N}}$ converges then
12:   Let $\bar{x}$ be such that $(x^k)_{k \in \mathbb{N}} \to \bar{x}$
13:  else [$(x^k)_{k \in \mathbb{N}}$ does not converge]
14:   return “Algorithm Failed”
15:  end if
16: end if
17: return $\bar{x}$
When $F$ has only one component, the algorithm works similarly to a gradient descent method with a direct calculation of the step size. The additional $\frac{\|d\|^2}{2}$ term is included to ensure that the minimum exists by making the argument strongly convex.
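To make the mechanics concrete, here is a condensed Python sketch of Algorithm 1 for box strategy spaces. It is our interpretation, not the authors’ Mathematica code: scipy’s SLSQP stands in for the exact arg min in step 2 (the max term is nonsmooth, so this is an approximation), and a coarse grid stands in for the exact line search in step 4. All names are ours.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def algorithm1(costs, grads, lo, hi, x0, beta=0.5, tol=1e-8, max_iter=200):
    """costs, grads: per-player f_i and full gradients of f_i; lo, hi: box
    bounds of X. Returns the last iterate, which is an SNE whenever the
    sequence converges (Theorem 2)."""
    n = len(costs)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        best_val, best_S, best_d = 0.0, None, None
        for size in range(1, n + 1):               # sweep coalitions in lex order
            for S in map(list, itertools.combinations(range(n), size)):
                def phi(dS, S=S):                  # ||d||^2/2 + beta*max_j grad_j(x).d
                    d = np.zeros(n); d[S] = dS
                    return 0.5 * dS @ dS + beta * max(grads[j](x) @ d for j in S)
                bnds = [(lo[i] - x[i], hi[i] - x[i]) for i in S]   # keep x + d in X
                res = minimize(phi, np.zeros(len(S)), bounds=bnds, method="SLSQP")
                if res.fun < best_val - 1e-12:     # strict improvement keeps lex tie-break
                    best_val, best_S, best_d = res.fun, S, res.x
        if best_S is None:                         # no descent pair found: stop
            return x
        d = np.zeros(n); d[best_S] = best_d
        jk = max(best_S, key=lambda j: grads[j](x) @ d)   # step 3 (constant term dropped)
        alphas = np.linspace(0.0, 1.0, 101)        # coarse stand-in for step 4
        feas = [a for a in alphas
                if all(costs[i](x + a * d) <= costs[i](x) + 1e-12 for i in best_S)]
        a = min(feas, key=lambda a: costs[jk](x + a * d))
        if a * np.linalg.norm(d) < tol:            # converged (or stalled)
            return x
        x = x + a * d
    return x
```

The coalition sweep is exponential in the number of players, which is inherent to the arg min over all coalitions; the tests in Section 5 stay at $n \le 6$.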
The first thing to be proven is that such step sizes exist.
Proposition 3. 
The step size $\alpha_k$ calculated in Algorithm 1 exists for all $k \in \mathbb{N}$.
Proof. 
Since $d = 0$ is a candidate for (3) in Algorithm 1, the minimum will have to be less than or equal to 0. If $d_k = 0$, then any $\alpha$, for example, $\alpha = 0$, would be sufficient as a value that minimizes the function.
Now, for $d \ne 0$, if
$$\frac{\|d\|^2}{2} + \beta \max_{j \in S} \{\nabla_S f_j(x^k_S, x^k_{-S}) \cdot d\} = 0,$$
then consider $d/2$ in place of $d$:
$$\frac{\|d/2\|^2}{2} + \beta \max_{j \in S} \left\{\nabla_S f_j(x^k_S, x^k_{-S}) \cdot \frac{d}{2}\right\} = -\frac{\|d\|^2}{8} + \frac{1}{2}\left(\frac{\|d\|^2}{2} + \beta \max_{j \in S} \{\nabla_S f_j(x^k_S, x^k_{-S}) \cdot d\}\right) = -\frac{\|d\|^2}{8} + 0 = -\frac{\|d\|^2}{8} < 0.$$
So the minimum cannot be 0 when it is attained at some $d \ne 0$. Now, because $d_k$ is a minimizer, whenever $d_k \ne 0$ it must be true that
$$\frac{\|d_k\|^2}{2} + \beta \max_{j \in S_k} \{\nabla_{S_k} f_j(x^k_{S_k}, x^k_{-S_k}) \cdot d_k\} < 0,$$
and so $\nabla_{S_k} f_j(x^k_{S_k}, x^k_{-S_k}) \cdot d_k < 0$ for all $j \in S_k$. So, $d_k$ acts as a direction of descent heading away from $x^k$ for $f_j$ for all $j \in S_k$.
Observing the constraints of the minimization problem defining $\alpha_k$: since $X$ is convex, closed, and bounded, there is a maximum $\alpha$ such that $x^k + \alpha d_k \in X$. Going further, because all the functions $f_i$ are convex and $d_k$ is a direction of descent away from $x^k$ for all $f_i$ with $i \in S_k$, there is some interval $[0, a]$ on which $f_i(x^k + \alpha d_k) - f_i(x^k) \le 0$ for all $i \in S_k$ and $\alpha \in [0, a]$.
Now, because $f_{j_k}$ is convex, it will be continuous on any line segment in its domain. And because $x^k + \alpha d_k$ for $\alpha \in [0, a]$ defines a line segment, a minimum can be obtained for $f_{j_k}$ on that segment using the extreme-value theorem. Note that $\alpha_k$ cannot be 0 because $f_{j_k}(x^k) > f_{j_k}(x^k + \alpha d_k)$ for at least one $\alpha \in (0, a]$. □
When $x^k$ is an SNE, the algorithm will calculate $d_k$ to be 0 over and over again, so it should be noted that the check statement at the beginning of the “while” loop may not be necessary.
In order to show that the convergence of the algorithm does actually imply that an SNE has been located, more lemmas will be required. The next lemma shows that there is an implication in convergence between the directions of descent and the indices giving the least descent from the coalitions that provide that direction.
Lemma 1. 
If $d_k$ converges to $d$ and $x^k$ converges to $\bar{x}$, then $j_k$, as described in (4), converges.
Proof. 
Since
$$\lim_{k \to \infty} \frac{\|d_k\|^2}{2} + \beta \nabla f_j(x^k_{S_k}, x^k_{-S_k}) \cdot d_k = \frac{\|d\|^2}{2} + \beta \nabla f_j(\bar{x}) \cdot d$$
for all $j$, then
$$\lim_{k \to \infty} j_k = \min_j \left\{ \operatorname*{arg\,max}_{j \in S} \frac{\|d\|^2}{2} + \beta \nabla f_j(\bar{x}) \cdot d \right\},$$
where $S$ is the collection of indices of the non-zero components of $d$. This is because if all the limits exist, then the limit of the maximums is the maximum of the limits. Here, the minimum is only being used since, in the calculation of $j_k$, if there is more than one element given in the arg max from (4), the smallest index of all the $j \in S_k$ that give the maximum is chosen. □
For the next lemma, it will be shown that if the directions converge to 0 and the algorithm’s iterates converge, then the algorithm actually finds a strong Nash equilibrium.
Lemma 2. 
If $d_k \to 0$ and $x^k$ converges in Algorithm 1, then the algorithm converges to a strong Nash equilibrium.
Proof. 
Instead, consider when $x^k \to \bar{x}$ but $\bar{x}$ is not an SNE. Since every component $f_i$ of $F$ is convex by the requirements of Algorithm 1, there are an $S^*$ and $x^* \in X_{S^*}$ such that, for each $i \in S^*$,
$$\nabla_{S^*} f_i(\bar{x}_{S^*}, \bar{x}_{-S^*}) \cdot (x^* - \bar{x}_{S^*}) < 0.$$
Convexity is necessary because, without that condition, we could end up in a situation where $\nabla_{S^*} f_i(\bar{x}_{S^*}, \bar{x}_{-S^*}) = 0$. But this is the same as
$$\nabla f_i(\bar{x}) \cdot v < 0$$
for all $i \in S^*$, where $v = x^* - \bar{x}_{S^*}$. Now, take a small enough $\alpha > 0$ so that
$$\frac{\|\alpha v\|^2}{2} + \beta \alpha \nabla f_i(\bar{x}) \cdot v < 0$$
for all $i \in S^*$, and relabel $\alpha v$ as $v$.
Take a $\delta > 0$ so that
$$\frac{\|v\|^2}{2} + \beta \nabla f_i(\bar{x}) \cdot v < -\delta < 0$$
for all $i \in S^*$, and then take $\epsilon = \frac{\delta}{4}$ and a large enough $K > 0$ so that $|\nabla f_i(\bar{x}) \cdot v - \nabla f_i(x^k) \cdot v| < \epsilon$ for all $k > K$. In that case,
$$\frac{\|v\|^2}{2} + \beta \nabla f_i(x^k) \cdot v \le \frac{\|v\|^2}{2} + \beta \nabla f_i(\bar{x}) \cdot v + \epsilon < \epsilon - \delta < -\frac{\delta}{2} < 0$$
for all $k > K$ and $i \in S^*$. But since $d_k \in X_{S_k} - x^k_{S_k}$ is a minimizer,
$$\frac{\|d_k\|^2}{2} + \beta \max_{i \in S_k} \{\nabla f_i(x^k) \cdot d_k\} \le \frac{\|v\|^2}{2} + \beta \max_{i \in S^*} \{\nabla f_i(x^k) \cdot v\} \le -\frac{\delta}{2} < 0.$$
But because $\delta$ is fixed, $d_k \not\to 0$. So, if $d_k \to 0$ and $x^k \to \bar{x}$, then it must be true that $\bar{x}$ is a strong Nash equilibrium. □
Moving on, we next discuss another lemma that will be helpful, as it shows that, under certain conditions, $\{d_k\}$ cannot converge to a non-zero vector.
Lemma 3. 
If all the cost functions $f_j$ are continuously differentiable, $\alpha_k \to 0$, and $x^k$ converges to $\bar{x}$ in Algorithm 1, then either $d_k$ converges to 0 or it does not converge at all.
Proof. 
Assume $d_k \to d \ne 0$. Note that
$$\lim_{k \to \infty} \nabla f_{j_k}(x^k) \cdot d_k \le 0.$$
Indeed, otherwise, the arg min statement that defines $d_k$ would give a positive value as the minimum because $\|d_k\| \to \|d\| > 0$. But that is not possible because the zero vector is always a possible choice for $d_k$, which in turn would return 0 as the minimum value. Also, note that as $\alpha_k$ is defined as a minimizer for a continuously differentiable function (that is, the parameterized function $f_{j_k}(x^k + \alpha d_k)$ with parameter $\alpha$), $\nabla f_{j_k}(x^k + \alpha_k d_k) \cdot d_k = 0$ for all $k$.
Using Lemma 1, let $j^* = \lim_{k \to \infty} j_k$. Now, take $b$ so that
$$\lim_{k \to \infty} \nabla f_{j_k}(x^k) \cdot d_k = \nabla f_{j^*}(\bar{x}) \cdot d = b < 0.$$
This must be non-zero, or else $d_k$ would have to converge to 0.
Since $x^k$ converges, it must be Cauchy, and so taking $\epsilon = \frac{|b|}{2\|d\|}$, a $K > 0$ can be chosen that is large enough so that
$$\|\nabla f_{j_k}(x^k + \alpha_k d_k) - \nabla f_{j_k}(x^k)\| < \epsilon = \frac{|b|}{2\|d\|}.$$
So, for all $k > K$, this gives
$$|\nabla f_{j_k}(x^k) \cdot d_k| = |(\nabla f_{j_k}(x^k + \alpha_k d_k) - \nabla f_{j_k}(x^k)) \cdot d_k| = |\cos(w_k)| \, \|\nabla f_{j_k}(x^k + \alpha_k d_k) - \nabla f_{j_k}(x^k)\| \, \|d_k\| < \frac{|b|}{2\|d\|} \|d_k\|,$$
where $w_k$ is the angle between $\nabla f_{j_k}(x^k + \alpha_k d_k) - \nabla f_{j_k}(x^k)$ and $d_k$. The limit is taken on both sides of the inequality, which yields
$$|b| = \lim_{k \to \infty} |\nabla f_{j_k}(x^k) \cdot d_k| \le \lim_{k \to \infty} \frac{|b|}{2\|d\|} \|d_k\| = \frac{|b|}{2},$$
which is a contradiction, as a non-zero $b$ cannot be less than or equal to half of itself. So, simply put, $d_k \not\to d \ne 0$. □
All of these lemmas finally lead to some conditions for which the convergence of the algorithm will lead to an SNE.
Theorem 2. 
With regard to Algorithm 1, when $X$ is compact and all $f_i$ are convex and continuously differentiable, $x^k$ converging to $\bar{x}$ implies that $\bar{x}$ must be an SNE.
Proof. 
The space $X$ being compact implies it is closed and bounded. Now, because $x^k \to \bar{x}$ and $x^{k+1} = x^k + \alpha_k d_k$, either $\alpha_k \to 0$ or $d_k \to 0$. However, Lemma 2 shows that $d_k \to 0$ implies that $x^k$ converges to an SNE. For the case when $\alpha_k \to 0$, Lemma 3 shows that either $d_k \to 0$ or $d_k$ does not converge. Again, in the first case, $d_k \to 0$ implies that $x^k$ converges to an SNE. So, it only remains to consider when $d_k$ does not converge. But because $X$ is compact, $\{d_k\}_{k \in \mathbb{N}}$ has a convergent sub-sequence $\{d_{k_i}\}_{i \in \mathbb{N}}$. Since $\{d_{k_i}\}$ is convergent, it will either converge to 0 or to some other point $d$ within the bounds of $X$.
Case 1: $d_{k_i} \to 0$.
Assuming that $\bar{x}$ is not an SNE (otherwise, the theorem would be proven), there must be a coalition $S^*$ and at least one direction $d^*$ for which
$$\frac{\|d^*\|^2}{2} + \beta \max_{j \in S^*} \nabla f_j(\bar{x}) \cdot d^* = b < 0. \tag{5}$$
The direction $d^*$ can indeed be found because there is some $d$ with $\beta \nabla f_j(\bar{x}) \cdot d \le c < 0$ for all $j \in S^*$, where $c := \beta \max_{j \in S^*} \nabla f_j(\bar{x}) \cdot d$. That $d$ can then be scaled down to $d^* = a d$ with $0 < a < -\frac{c}{\|d\|^2}$ and $a \le 1$, which ensures
$$\frac{\|d^*\|^2}{2} + \beta \max_{j \in S^*} \nabla f_j(\bar{x}) \cdot d^* = \frac{a^2 \|d\|^2}{2} + a \beta \max_{j \in S^*} \nabla f_j(\bar{x}) \cdot d < a \left(-\frac{c}{\|d\|^2}\right) \frac{\|d\|^2}{2} + a c = \frac{a c}{2} < 0.$$
We then take that $d^*$, whose corresponding value satisfies $b \le \frac{ac}{2} < 0$, so that the inequality in (5) holds.
Let $\epsilon = \frac{|b|}{2\|d^*\|}$ and choose a large enough $K > 0$ so that, for all $j \in \{1, \dots, n\}$, $\|\nabla f_j(\bar{x}) - \nabla f_j(x^{k_i})\| < \epsilon$ for all $i > K$. Using this $\epsilon$ inequality, along with the fact that $d_{k_i}$ is defined as a minimizer in line (3) of Algorithm 1, gives
$$\frac{\|d_{k_i}\|^2}{2} + \beta \max_{j \in S_{k_i}} \nabla f_j(x^{k_i}_{S_{k_i}}, x^{k_i}_{-S_{k_i}}) \cdot d_{k_i} \le \frac{\|d^*\|^2}{2} + \beta \max_{j \in S^*} \nabla f_j(x^{k_i}_{S^*}, x^{k_i}_{-S^*}) \cdot d^* < \frac{\|d^*\|^2}{2} + \beta \max_{j \in S^*} \nabla f_j(\bar{x}) \cdot d^* + \epsilon \|d^*\| = b + \epsilon \|d^*\| = b + \frac{|b|}{2} = \frac{b}{2} < 0.$$
Since $d_{k_i} \to 0$, taking the limit as $k_i$ goes to infinity across this whole inequality reveals the contradiction that $0 \le \frac{b}{2} < 0$, which is impossible. So, $\bar{x}$ would have to be an SNE.
Case 2: $d_{k_i} \to d \ne 0$.
This argument’s approach is the same as that of Lemma 3, but on the sub-sequence $d_{k_i}$ instead of $d_k$. Since $d_{k_i} \to d \ne 0$, it must be true that $\lim_{i \to \infty} \nabla f_{j_{k_i}}(x^{k_i}) \cdot d_{k_i} = \nabla f_{j^*}(\bar{x}) \cdot d = b < 0$, where $j^* = \lim_{i \to \infty} j_{k_i}$ from Lemma 1, using the sub-sequence $d_{k_i}$ in place of $d_k$.
The original assumption that $x^k$ converges implies that $x^{k_i}$ is Cauchy, so taking $\epsilon = \frac{|b|}{2\|d\|}$, there will be a large enough $K > 0$ so that, for all $i > K$,
$$\|\nabla f_j(x^{k_i + 1}) - \nabla f_j(x^{k_i})\| < \epsilon.$$
It must also be noted that, by the definition of $\alpha_{k_i}$, $\nabla f_{j_{k_i}}(x^{k_i} + \alpha_{k_i} d_{k_i}) \cdot d_{k_i} = 0$. By using the Cauchy–Schwarz inequality,
$$|\nabla f_{j_{k_i}}(x^{k_i}) \cdot d_{k_i}| = |(\nabla f_{j_{k_i}}(x^{k_i} + \alpha_{k_i} d_{k_i}) - \nabla f_{j_{k_i}}(x^{k_i})) \cdot d_{k_i}| \le \|\nabla f_{j_{k_i}}(x^{k_i} + \alpha_{k_i} d_{k_i}) - \nabla f_{j_{k_i}}(x^{k_i})\| \, \|d_{k_i}\| < \epsilon \|d_{k_i}\|.$$
Again, a limit is taken on this inequality, which gives
$$|b| = |\nabla f_{j^*}(\bar{x}) \cdot d| = \lim_{i \to \infty} |\nabla f_{j_{k_i}}(x^{k_i}) \cdot d_{k_i}| \le \lim_{i \to \infty} \epsilon \|d_{k_i}\| = \epsilon \|d\| = \frac{|b|}{2},$$
which is a contradiction as $b \ne 0$, so $d_{k_i} \not\to d$. □
So, whenever the algorithm converges, it will be to a strong Nash equilibrium. It can be seen when the algorithm is applied to an example game without an SNE that multiple cluster points will be produced [11]. To show the algorithm in action, an example is given.
Example 3. 
Consider the following problem:
$X = [0, 1]^3$, $F : X \to \mathbb{R}^3$ with $F = (f_1, f_2, f_3)$,
$$f_1(y_1, y_2, y_3) = y_1(y_2 - y_3) + y_1, \quad f_2(y_1, y_2, y_3) = y_2(y_3 - y_1) + y_2, \quad \text{and} \quad f_3(y_1, y_2, y_3) = y_3(y_1 - y_2) + y_3.$$
Note that the cost functions are convex and there is an SNE at $(0, 0, 0)$.
Consider Algorithm 1 using $\beta = \frac{1}{2}$ and apply it to the initial guess $x^0 = \left( \frac{1}{2}, \frac{1}{2}, \frac{1}{2} \right)$; then,
$$\nabla f_1(x^0) = \left( 1, \tfrac{1}{2}, -\tfrac{1}{2} \right), \quad \nabla f_2(x^0) = \left( -\tfrac{1}{2}, 1, \tfrac{1}{2} \right), \quad \text{and} \quad \nabla f_3(x^0) = \left( \tfrac{1}{2}, -\tfrac{1}{2}, 1 \right).$$
All of the gradients have a similarity and so, when generalizing, there are only three types of cases that need to be checked. These cases are similar to when $S_0$ is $\{1\}$, $\{1, 2\}$, or $\{1, 2, 3\}$. The easiest case to check is when player 1 is acting on their own, which is when $S_0 = \{1\}$. After simplifying, this is carried out by finding $d_1$ in
$$\operatorname*{arg\,min}_{d_1 \in \left[ -\frac{1}{2}, \frac{1}{2} \right]} \left\{ \frac{d_1^2}{2} + \frac{1}{2} d_1 \right\},$$
which gives $d_1 = -\frac{1}{2}$; that is to say, the direction of descent is $d = \left( -\frac{1}{2}, 0, 0 \right)$, which gives the corresponding minimum of $-\frac{1}{8}$.
When $S_0 = \{1, 2\}$, then $d_1$ and $d_2$ need to be found in
$$\operatorname*{arg\,min}_{d_1, d_2 \in \left[ -\frac{1}{2}, \frac{1}{2} \right]} \left\{ \frac{d_1^2 + d_2^2}{2} + \frac{1}{2} \max \left\{ d_1 + \frac{d_2}{2},\ -\frac{d_1}{2} + d_2 \right\} \right\},$$
which ends up giving the direction $d = \left( -\frac{1}{8}, -\frac{3}{8}, 0 \right)$ and a corresponding minimum of $-\frac{5}{64}$.
Lastly, when $S_0 = \{1, 2, 3\}$, $d_1$, $d_2$, and $d_3$ all need to be found from
$$\operatorname*{arg\,min}_{d_1, d_2, d_3 \in \left[ -\frac{1}{2}, \frac{1}{2} \right]} \left\{ \frac{d_1^2 + d_2^2 + d_3^2}{2} + \frac{1}{2} \max \left\{ d_1 + \frac{d_2}{2} - \frac{d_3}{2},\ -\frac{d_1}{2} + d_2 + \frac{d_3}{2},\ \frac{d_1}{2} - \frac{d_2}{2} + d_3 \right\} \right\},$$
which gives the direction of descent as $d = \left( -\frac{1}{6}, -\frac{1}{6}, -\frac{1}{6} \right)$ and a corresponding minimum of $-\frac{1}{24}$.
From these three cases, the direction that actually minimizes the step in the algorithm is $\left( -\frac{1}{2}, 0, 0 \right)$, as $-\frac{1}{8}$ is the smallest corresponding minimum, and $\alpha_0$ is found to be 1, so $x^1 = \left( 0, \frac{1}{2}, \frac{1}{2} \right)$. The coalition of the first player is the one the step uses, but only because it is lexicographically first compared to the similar single-player coalitions $\{2\}$ and $\{3\}$ from which we generalized.
For brevity, after that, the next direction is $\left( 0, -\frac{1}{2}, 0 \right)$ and $x^2 = \left( 0, 0, \frac{1}{2} \right)$. Then, the last descent direction is $\left( 0, 0, -\frac{1}{2} \right)$, which gives the SNE $x^3 = (0, 0, 0)$.
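As a quick check on the first step above, the $S_0 = \{1\}$ subproblem can be reproduced numerically (a sketch; the closed form gives $d_1 = -\frac{1}{2}$ with value $-\frac{1}{8}$):

```python
from scipy.optimize import minimize_scalar

phi = lambda d: 0.5 * d**2 + 0.5 * d   # ||d||^2/2 + beta * (grad f1 . d), beta = 1/2
res = minimize_scalar(phi, bounds=(-0.5, 0.5), method="bounded")
print(round(res.x, 3), round(res.fun, 3))  # -0.5 -0.125
```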
While this may seem like a drawn-out process, each step is relatively easy to compute for many convex problems using computer programming. However easy each step may be to calculate, it can be shown that Algorithm 1 may still become stuck in an endless loop, even for convex functions with an SNE present [11].

5. Testing

Tests were programmed using Mathematica to determine some metrics on the algorithm’s performance. The problem types designed for testing were games with convex quadratic cost functions, where the quadratic coefficient was generated by taking a random matrix with entries from −10 to 10 and multiplying it by its transpose to obtain a positive semi-definite matrix, and the linear coefficient was taken as a random vector with entries from −10 to 10. The strategy space was determined by giving each player a random interval within [−10, 10] with integer endpoints. The stopping criteria were running beyond the maximum number of iterations (200 unless noted otherwise), stepping to a solution that has already been found (which will result in a loop), or having the last two consecutive solutions within a convergence tolerance (0.00001 unless otherwise noted).
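A reconstruction of that test-problem generator in Python might look as follows (a sketch under our reading of the setup; names are ours, and the output plugs into the algorithm1 sketch from Section 4):

```python
import numpy as np

def random_quadratic_game(n, rng=None):
    rng = rng or np.random.default_rng()
    Qs, bs = [], []
    for _ in range(n):
        M = rng.uniform(-10, 10, size=(n, n))
        Qs.append(M @ M.T)                        # positive semi-definite, so convex
        bs.append(rng.uniform(-10, 10, size=n))
    ends = np.sort(rng.integers(-10, 11, size=(n, 2)), axis=1)
    lo, hi = ends[:, 0].astype(float), ends[:, 1].astype(float)  # player i plays in [lo_i, hi_i]
    costs = [lambda x, Q=Qs[i], b=bs[i]: x @ Q @ x + b @ x for i in range(n)]
    grads = [lambda x, Q=Qs[i], b=bs[i]: 2 * (Q @ x) + b for i in range(n)]
    return costs, grads, lo, hi
```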
Table 1 describes the performance of the algorithm in different contexts. Column “n” gives the number of players possible for the random games, “#Prob” gives the number of random games the algorithm was run on, and “#x0’s” gives the number of random start points for every random game. The “$\alpha_k$” column indicates whether it is calculated as described in the algorithm or fixed at 1. “Conv” is the count of convergent runs. “Conv It” is the average number of iterations that the algorithm took to converge. “Loops” gives the number of runs for which the algorithm ended up in a loop, with “Loop It” being the average number of iterations that it took a run to fall into a loop. “Fails” is the number of runs that neither converged nor ended up in a loop within the maximum number of iterations.
As the table shows, whether $\alpha_k$ is calculated or not, the number of iterations required for convergence is low when the algorithm converges, often taking only five to ten steps. It should also be noted that the algorithm may still fall into loops even when an SNE strategy is available, and it takes more iterations to end up in those loops.
In these tests, $\alpha_k = 1$ was considered for the practical reason that it significantly reduced the calculation time by removing a minimization step that relied on the built-in Mathematica tool set. The pairs tallied with the same problems and start points were used to further investigate the difference between $\alpha_k = 1$ and a calculated $\alpha_k$. The pairs that may cause some confusion are the algorithm tests with the tightened convergence tolerance. The tightened tolerance caused some issues whereby the program recorded convergences as loops, because the calculated $\alpha_k$ came out as practically 0 (values like $3.1235 \times 10^{-9}$). The record-keeping system that was programmed could not differentiate convergence from looping at such small values and lumped many convergences in with the loops. This is also evident from the extremely low values in the loop-iteration column. After reviewing the data, there were loops, but not nearly as often as indicated in the table. So, this data needs to be considered carefully.
This data revealed some good practical techniques for execution. When using this algorithm, it should first be run using a fixed $\alpha_k = 1$ to obtain a quick initial check for convergence. If convergence fails with $\alpha_k = 1$, the algorithm can then be run again using the line search method laid out in the original algorithm, which will have a longer computing time and require more iterations, but it can more often succeed in finding an SNE and keep the algorithm from ending up in loops. A few random start positions can be used if convergence initially fails. Care needs to be taken with near-0 calculations. And lastly, the maximum number of iterations can be lowered significantly.
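Putting the pieces together, that workflow might look like the sketch below (ours, reusing the earlier algorithm1 and random_quadratic_game sketches); the fixed $\alpha_k = 1$ variant from the tests would amount to replacing the line search inside algorithm1 with a constant step.

```python
import numpy as np

rng = np.random.default_rng(0)
costs, grads, lo, hi = random_quadratic_game(2, rng)
for trial in range(3):                       # a few random starts per game
    x0 = rng.uniform(lo, hi)
    xbar = algorithm1(costs, grads, lo, hi, x0, max_iter=50)
    print(trial, np.round(xbar, 4))          # candidate SNE from this start
```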

6. Conclusions

Oftentimes, players in game-theoretic problems are allowed to collaborate, and in games where there is not any particular resentment among players, there may be a desire to find outcomes that benefit as many players as possible. In such games, a strong Nash equilibrium may be desirable, as this is the sort of solution where no amount of scheming can yield better results for any players. While some readers may be initially dismissive of strong Nash equilibria as being too restrictive, rare, or hard to identify, the results in this paper indicate that they can potentially be identified with relatively low effort, so seeking an SNE should not be overlooked. Considering that an SNE could be a unanimously favored solution for a problem if it exists, investing a small amount of time in seeking one would not be time wasted. What was carried out in this paper was a further exploration of the concept of strong Nash equilibria using modern methods from variational analysis. Also, a new algorithm was developed and tested for aiding in finding SNEs.
First, the concept of strong Nash equilibria was connected to the concept of efficiency under certain circumstances. Next, the concepts of the normal cone and variational inequality were used to find a sufficient condition for showing a solution to be an SNE. A characterization of two-player games using the variational inequality was also provided. The algorithm for finding an SNE was developed using a projected gradient method by calling the arg min function over all coalitions of players possible. It was shown that when the algorithm converges, it does so to an SNE. Some tests were performed to check the practical efficiency of the algorithm.
Further research can be carried out to explore the convergence rate of this sort of projected gradient algorithm. The relatively quick convergence in the test results, while interesting, is no proof, but it does make one hopeful that the convergence is not too slow. Additionally, the computational complexity of this algorithm and some further test results and analyses could be of interest. It would also be prudent to compare the algorithm’s results with other algorithms for calculating strong Nash equilibria, like those in [6,7].
Another direction for advancing this research could be creating an SNE algorithm based on the weighted best reply, as mentioned in [5]. That algorithm may be as simple as finding the best reply to $x^k$ and moving there. Then, the fact that a fixed point of the best reply is in fact an SNE for convex problems may make it likely that convergence of the algorithm results in an SNE. Hopefully, that type of algorithm would always converge for problems that have the coalition consistency principle.

Author Contributions

Writing—original draft, G.H. and S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The authors declare that the code and generated data supporting the findings of this study are available at https://doi.org/10.7910/DVN/LRA8EK (accessed on 7 November 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Neumann, J.V. Zur theorie der gesellschaftsspiele. Math. Ann. 1928, 100, 295–320. [Google Scholar] [CrossRef]
  2. Nash, J. Non-cooperative games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
  3. Glicksberg, I.L. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. Proc. Am. Math. Soc. 1952, 3, 170–174. [Google Scholar] [CrossRef]
  4. Aumann, R.J. Acceptable points in general cooperative n-person games. In Contributions to the Theory of Games (AM-40); Princeton University Press: Princeton, NJ, USA, 1959; Volume 4, pp. 287–324. [Google Scholar]
  5. Nessah, R.; Tian, G. On the existence of strong Nash equilibria. J. Math. Anal. Appl. 2014, 414, 871–885. [Google Scholar] [CrossRef]
  6. Gatti, N.; Rocco, M.; Sandholm, T. Algorithms for strong Nash equilibrium with more than two agents. In Proceedings of the AAAI Conference on Artificial Intelligence, Bellevue, WA, USA, 14–18 July 2013; Volume 27, pp. 342–349. [Google Scholar]
  7. Gatti, N.; Rocco, M.; Sandholm, T. On the verification and computation of strong Nash equilibrium. arXiv 2017, arXiv:1711.06318. [Google Scholar] [CrossRef]
  8. Facchinei, F.; Pang, J.-S. Chapter 12 Nash equilibria: The variational approach. In Convex Optimization in Signal Processing and Communications; CRC Press: Boca Raton, FL, USA, 2010; pp. 443–493. [Google Scholar]
  9. Scutari, G.; Palomar, D.P.; Facchinei, F.; Pang, J.-S. Convex optimization, game theory, and variational inequality theory. IEEE Signal Process. Mag. 2010, 27, 35–49. [Google Scholar] [CrossRef]
  10. Rockafellar, R.T. Applications of convex variational analysis to Nash equilibrium. In Proceedings of the 7th International Conference on Nonlinear Analysis and Convex Analysis, Busan, Republic of Korea, 2–5 August 2011; pp. 173–183. [Google Scholar]
  11. Harris, G. Providing Better Choices: An Exploration of Solutions in Multi-Objective Optimization and Game Theory Using Variational Analysis. Ph.D. Thesis, Northern Illinois University, DeKalb, IL, USA, 2020. [Google Scholar]
  12. Pareto, V. Manual of Political Economy; English Translation of the 1909 French Edition of the 1906 Italian Manuale d’Economia Politica con una Introduzione alla Scienza Sociale; Società Editrice Libraria: Milano, Italy; Augustus M. Kelley: New York, NY, USA, 1909. [Google Scholar]
  13. Koopmans, T.C. Efficient allocation of resources. Econom. J. Econom. Soc. 1951, 455–465. [Google Scholar] [CrossRef]
  14. Kuhn, H.W.; Tucker, A.W. Nonlinear programming. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 31 July–12 August 1950; pp. 481–492. [Google Scholar]
  15. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer-Verlag: New York, NY, USA, 1998. [Google Scholar]
  16. Morgenstern, O.; Von Neumann, J. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1953. [Google Scholar]
  17. Drummond, L.G.; Iusem, A.N. A projected gradient method for vector optimization problems. Comput. Optim. Appl. 2004, 28, 5–29. [Google Scholar] [CrossRef]
  18. Fliege, J.; Svaiter, B.F. Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 2000, 51, 479–494. [Google Scholar] [CrossRef]
Table 1. Algorithm 1 results (200 iterations max unless indicated otherwise).

n | #Prob | #x0's | α_k | Conv | Conv It | Loops | Loop It | Fails
--- | --- | --- | --- | --- | --- | --- | --- | ---
2–6 | 50 | 1 | Calc | 14 | 10 | 23 | 59 | 13
2–6 | 50 | 1 | 1 | 5 | 4.64 | 45 | 9.62 | 0
2 | 50 | 1 | Calc | 46 | 7.22 | 4 | 27.5 | 0
2 | 50 | 1 | 1 | 27 | 4.19 | 23 | 5.17 | 0
4 | 50 | 1 | 1 | 7 | 5.43 | 43 | 7.86 | 0
6 | 50 | 1 | 1 | 2 | 5 | 48 | 12.60 | 0
All subsequent pairs tallied with the same problems and start points
3 | 1 | 100 | Calc | 29 | 7.93 | 71 | 87.03 | 0
3 | 1 | 100 | 1 | 0 | N/A | 100 | 21.26 | 0
Subsequent pair used a convergence tolerance of 0.0000001
2 | 50 | 200 | Calc | 6397 | 6.80 | 3374 | 2.05 | 229
2 | 50 | 200 | 1 | 9096 | 3.14 | 331 | 19.78 | 573
Subsequent pair used 300 iterations max
4 | 1 | 200 | Calc | 0 | N/A | 11 | 205.01 | 189
4 | 1 | 200 | 1 | 0 | N/A | 200 | 4.78 | 0
Subsequent pair used 300 iterations max and a conv. tol. of 0.0000001
3 | 20 | 20 | Calc | 225 | 13.81 | 49 | 2.04 | 126
3 | 20 | 20 | 1 | 237 | 4.57 | 56 | 39.77 | 107
